Sep 30 17:15:41 crc systemd[1]: Starting Kubernetes Kubelet...
Sep 30 17:15:41 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:41 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 
17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 17:15:42 crc 
restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 
17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 
17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 17:15:42 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Sep 30 17:15:43 crc kubenswrapper[4688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 30 17:15:43 crc kubenswrapper[4688]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Sep 30 17:15:43 crc kubenswrapper[4688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 30 17:15:43 crc kubenswrapper[4688]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
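The restorecon sweep ends here. Every "not reset as customized by admin" entry above is restorecon declining to touch a path whose current type (container_file_t) sits on the SELinux customizable-types list; only genuinely mislabeled files, such as /var/usrlocal/bin/kubenswrapper going from bin_t to kubelet_exec_t, were actually relabeled, and a forced run (restorecon -F) would reset the customized labels as well. To get a per-pod tally of the skipped paths, a minimal Python sketch along these lines could be run over a saved copy of this journal excerpt (the filename kubelet.log and the per-pod grouping are illustrative assumptions, not part of the log):

```python
import re
from collections import Counter

# Count "not reset as customized by admin" entries per pod UID.
# Assumes the journal excerpt above was saved verbatim to kubelet.log
# (a hypothetical filename). Works whether entries are one per line or
# several entries fused onto a single line, as in this excerpt.
pod_re = re.compile(
    r"/var/lib/kubelet/pods/([0-9a-f-]+)/\S+ not reset as customized"
)

counts = Counter()
with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        for uid in pod_re.findall(line):
            counts[uid] += 1

for uid, n in counts.most_common():
    print(f"{uid}: {n} path(s) left with their customized label")
```

On this excerpt it would report the two catalog pods (5225d0e4-402f-4861-b410-819f433b1803 and 1d611f23-29be-4491-8495-bee1670e935f) as the dominant sources, which matches the flood of per-operator catalog.json entries above.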
Sep 30 17:15:43 crc kubenswrapper[4688]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 30 17:15:43 crc kubenswrapper[4688]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.142508 4688 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.147986 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148022 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148032 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148040 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148048 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148058 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148066 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148074 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148082 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148090 4688 feature_gate.go:330] unrecognized feature gate: Example Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148098 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148106 4688 feature_gate.go:330] unrecognized feature gate: NewOLM Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148113 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148124 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
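The six "Flag ... has been deprecated" warnings that open the kubenswrapper startup above all carry the same advice: move the setting into the KubeletConfiguration file named by --config. As a migration checklist, a small Python sketch can map each deprecated flag seen in this log to its config-file counterpart; the field names below are drawn from the kubelet.config.k8s.io/v1beta1 schema and should be treated as an assumption to verify against the running release, and --pod-infra-container-image has no field at all, per its own warning:

```python
# Map each deprecated kubelet flag reported above to its assumed
# KubeletConfiguration (kubelet.config.k8s.io/v1beta1) replacement.
# None means the flag is simply going away (sandbox image info now
# comes from the CRI runtime).
DEPRECATED_FLAG_REPLACEMENTS = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--minimum-container-ttl-duration": "evictionHard / evictionSoft",
    "--volume-plugin-dir": "volumePluginDir",
    "--register-with-taints": "registerWithTaints",
    "--system-reserved": "systemReserved",
    "--pod-infra-container-image": None,
}

def replacement_for(flag: str) -> str:
    field = DEPRECATED_FLAG_REPLACEMENTS.get(flag)
    if field is None:
        return f"{flag}: no config-file field; drop the flag"
    return f"{flag}: set '{field}' in the file passed to --config"

if __name__ == "__main__":
    for flag in DEPRECATED_FLAG_REPLACEMENTS:
        print(replacement_for(flag))
```

Printing the table gives a quick before-editing checklist against whatever file the unit's --config flag points at.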
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.147986 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148022 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148032 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148040 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148048 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148058 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148066 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148074 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148082 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148090 4688 feature_gate.go:330] unrecognized feature gate: Example
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148098 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148106 4688 feature_gate.go:330] unrecognized feature gate: NewOLM
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148113 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148124 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148140 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148150 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148159 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148167 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148196 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148204 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148213 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148221 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148233 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148242 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148251 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148258 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148266 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148274 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148282 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148290 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148297 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148306 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148314 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148321 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148329 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148337 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148344 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148352 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148360 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148368 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148378 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148388 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148410 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148418 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148426 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148434 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148442 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148454 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148464 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148474 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148483 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148491 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148500 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148507 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148515 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148522 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148530 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148538 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148545 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148553 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148560 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148590 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148600 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148611 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148620 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148644 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148655 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148664 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148675 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148687 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.148697 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
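The long run of feature_gate.go:330 warnings above is the kubelet parsing a gate list that evidently includes cluster-level (OpenShift) gates its vendored feature registry does not define; unknown names are skipped with a warning, while the handful it does recognize (the feature_gate.go:351 and :353 records) are applied and reappear in the resolved map logged at feature_gate.go:386 further down. In config-file terms, the recognized gates amount to something like the sketch below; featureGates is a standard KubeletConfiguration field, and the values shown are the ones this log reports as set.

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      CloudDualStackNodeIPs: true                   # GA gate; setting it explicitly draws a warning
      DisableKubeletCloudCredentialProviders: true  # GA gate
      KMSv1: true                                   # deprecated gate
      ValidatingAdmissionPolicy: true               # GA gate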
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149730 4688 flags.go:64] FLAG: --address="0.0.0.0"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149744 4688 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149758 4688 flags.go:64] FLAG: --anonymous-auth="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149764 4688 flags.go:64] FLAG: --application-metrics-count-limit="100"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149769 4688 flags.go:64] FLAG: --authentication-token-webhook="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149773 4688 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149779 4688 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149793 4688 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149797 4688 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149802 4688 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149806 4688 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149810 4688 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149814 4688 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149819 4688 flags.go:64] FLAG: --cgroup-root=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149823 4688 flags.go:64] FLAG: --cgroups-per-qos="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149827 4688 flags.go:64] FLAG: --client-ca-file=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149830 4688 flags.go:64] FLAG: --cloud-config=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149834 4688 flags.go:64] FLAG: --cloud-provider=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149838 4688 flags.go:64] FLAG: --cluster-dns="[]"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149846 4688 flags.go:64] FLAG: --cluster-domain=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149850 4688 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149854 4688 flags.go:64] FLAG: --config-dir=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149857 4688 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149862 4688 flags.go:64] FLAG: --container-log-max-files="5"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149867 4688 flags.go:64] FLAG: --container-log-max-size="10Mi"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149871 4688 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149875 4688 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149880 4688 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149884 4688 flags.go:64] FLAG: --contention-profiling="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149888 4688 flags.go:64] FLAG: --cpu-cfs-quota="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149892 4688 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149896 4688 flags.go:64] FLAG: --cpu-manager-policy="none"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149900 4688 flags.go:64] FLAG: --cpu-manager-policy-options=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149906 4688 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149910 4688 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149914 4688 flags.go:64] FLAG: --enable-debugging-handlers="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149918 4688 flags.go:64] FLAG: --enable-load-reader="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149922 4688 flags.go:64] FLAG: --enable-server="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149926 4688 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149933 4688 flags.go:64] FLAG: --event-burst="100"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149939 4688 flags.go:64] FLAG: --event-qps="50"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149943 4688 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149947 4688 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149956 4688 flags.go:64] FLAG: --eviction-hard=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149962 4688 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149966 4688 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149970 4688 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149974 4688 flags.go:64] FLAG: --eviction-soft=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149978 4688 flags.go:64] FLAG: --eviction-soft-grace-period=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149983 4688 flags.go:64] FLAG: --exit-on-lock-contention="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149987 4688 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149991 4688 flags.go:64] FLAG: --experimental-mounter-path=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149994 4688 flags.go:64] FLAG: --fail-cgroupv1="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.149998 4688 flags.go:64] FLAG: --fail-swap-on="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150002 4688 flags.go:64] FLAG: --feature-gates=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150008 4688 flags.go:64] FLAG: --file-check-frequency="20s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150012 4688 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150016 4688 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150020 4688 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150024 4688 flags.go:64] FLAG: --healthz-port="10248"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150029 4688 flags.go:64] FLAG: --help="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150033 4688 flags.go:64] FLAG: --hostname-override=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150036 4688 flags.go:64] FLAG: --housekeeping-interval="10s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150041 4688 flags.go:64] FLAG: --http-check-frequency="20s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150045 4688 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150049 4688 flags.go:64] FLAG: --image-credential-provider-config=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150052 4688 flags.go:64] FLAG: --image-gc-high-threshold="85"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150056 4688 flags.go:64] FLAG: --image-gc-low-threshold="80"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150060 4688 flags.go:64] FLAG: --image-service-endpoint=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150064 4688 flags.go:64] FLAG: --kernel-memcg-notification="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150068 4688 flags.go:64] FLAG: --kube-api-burst="100"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150072 4688 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150079 4688 flags.go:64] FLAG: --kube-api-qps="50"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150083 4688 flags.go:64] FLAG: --kube-reserved=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150087 4688 flags.go:64] FLAG: --kube-reserved-cgroup=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150091 4688 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150095 4688 flags.go:64] FLAG: --kubelet-cgroups=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150099 4688 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150103 4688 flags.go:64] FLAG: --lock-file=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150112 4688 flags.go:64] FLAG: --log-cadvisor-usage="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150116 4688 flags.go:64] FLAG: --log-flush-frequency="5s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150121 4688 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150127 4688 flags.go:64] FLAG: --log-json-split-stream="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150131 4688 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150135 4688 flags.go:64] FLAG: --log-text-split-stream="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150139 4688 flags.go:64] FLAG: --logging-format="text"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150143 4688 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150147 4688 flags.go:64] FLAG: --make-iptables-util-chains="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150151 4688 flags.go:64] FLAG: --manifest-url=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150155 4688 flags.go:64] FLAG: --manifest-url-header=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150161 4688 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150166 4688 flags.go:64] FLAG: --max-open-files="1000000"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150171 4688 flags.go:64] FLAG: --max-pods="110"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150175 4688 flags.go:64] FLAG: --maximum-dead-containers="-1"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150179 4688 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150183 4688 flags.go:64] FLAG: --memory-manager-policy="None"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150187 4688 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150191 4688 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150195 4688 flags.go:64] FLAG: --node-ip="192.168.126.11"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150199 4688 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150211 4688 flags.go:64] FLAG: --node-status-max-images="50"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150216 4688 flags.go:64] FLAG: --node-status-update-frequency="10s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150220 4688 flags.go:64] FLAG: --oom-score-adj="-999"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150225 4688 flags.go:64] FLAG: --pod-cidr=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150229 4688 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150243 4688 flags.go:64] FLAG: --pod-manifest-path=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150248 4688 flags.go:64] FLAG: --pod-max-pids="-1"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150252 4688 flags.go:64] FLAG: --pods-per-core="0"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150257 4688 flags.go:64] FLAG: --port="10250"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150261 4688 flags.go:64] FLAG: --protect-kernel-defaults="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150265 4688 flags.go:64] FLAG: --provider-id=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150269 4688 flags.go:64] FLAG: --qos-reserved=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150273 4688 flags.go:64] FLAG: --read-only-port="10255"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150277 4688 flags.go:64] FLAG: --register-node="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150281 4688 flags.go:64] FLAG: --register-schedulable="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150290 4688 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150306 4688 flags.go:64] FLAG: --registry-burst="10"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150310 4688 flags.go:64] FLAG: --registry-qps="5"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150314 4688 flags.go:64] FLAG: --reserved-cpus=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150318 4688 flags.go:64] FLAG: --reserved-memory=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150324 4688 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150328 4688 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150332 4688 flags.go:64] FLAG: --rotate-certificates="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150336 4688 flags.go:64] FLAG: --rotate-server-certificates="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150340 4688 flags.go:64] FLAG: --runonce="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150344 4688 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150348 4688 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150353 4688 flags.go:64] FLAG: --seccomp-default="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150357 4688 flags.go:64] FLAG: --serialize-image-pulls="true"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150361 4688 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150365 4688 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150369 4688 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150373 4688 flags.go:64] FLAG: --storage-driver-password="root"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150377 4688 flags.go:64] FLAG: --storage-driver-secure="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150381 4688 flags.go:64] FLAG: --storage-driver-table="stats"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150385 4688 flags.go:64] FLAG: --storage-driver-user="root"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150389 4688 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150393 4688 flags.go:64] FLAG: --sync-frequency="1m0s"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150398 4688 flags.go:64] FLAG: --system-cgroups=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150401 4688 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150408 4688 flags.go:64] FLAG: --system-reserved-cgroup=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150412 4688 flags.go:64] FLAG: --tls-cert-file=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150416 4688 flags.go:64] FLAG: --tls-cipher-suites="[]"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150424 4688 flags.go:64] FLAG: --tls-min-version=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150428 4688 flags.go:64] FLAG: --tls-private-key-file=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150432 4688 flags.go:64] FLAG: --topology-manager-policy="none"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150436 4688 flags.go:64] FLAG: --topology-manager-policy-options=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150440 4688 flags.go:64] FLAG: --topology-manager-scope="container"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150445 4688 flags.go:64] FLAG: --v="2"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150451 4688 flags.go:64] FLAG: --version="false"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150457 4688 flags.go:64] FLAG: --vmodule=""
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150467 4688 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150472 4688 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
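The flags.go:64 dump above records the parsed value of every registered flag at startup, one record per flag; most are compiled-in defaults, and only a handful (--config, --container-runtime-endpoint, --node-ip, --node-labels, --register-with-taints, --system-reserved, --v, ...) were actually set on this command line. These are flag-level values: the effective runtime settings come from merging them with /etc/kubernetes/kubelet.conf, with explicitly set flags taking precedence over the file. Purely for illustration, a few of the dumped values correspond to config-file fields like so:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 110                               # --max-pods
    serializeImagePulls: true                  # --serialize-image-pulls
    evictionPressureTransitionPeriod: 5m0s     # --eviction-pressure-transition-period
    streamingConnectionIdleTimeout: 4h0m0s     # --streaming-connection-idle-timeout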
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150630 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150636 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150640 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150644 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150648 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150652 4688 feature_gate.go:330] unrecognized feature gate: NewOLM
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150655 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150659 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150663 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150668 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150672 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150677 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150682 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150685 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150689 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150693 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150697 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150700 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150703 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150707 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150711 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150714 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150717 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150721 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150724 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150728 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150732 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150735 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150739 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150742 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150746 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150751 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150755 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150759 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150768 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150772 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150776 4688 feature_gate.go:330] unrecognized feature gate: Example
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150780 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150783 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150787 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150790 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150794 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150797 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150801 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150804 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150807 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150811 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150815 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150819 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150823 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150826 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150830 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150833 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150837 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150840 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150844 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150847 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150851 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150854 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150858 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150861 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150865 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150868 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150872 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150875 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150880 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150884 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150888 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150892 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150897 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.150905 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.150917 4688 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.164534 4688 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.164652 4688 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164841 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164877 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164893 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164908 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164924 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164943 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164956 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164970 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164983 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.164995 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165008 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165019 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165030 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165041 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165051 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165061 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165071 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165081 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165090 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165099 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165112 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165122 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165132 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165145 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165156 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165168 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165178 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165188 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165198 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165209 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165218 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165229 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165238 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165249 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165260 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165271 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165282 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165292 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165303 4688 feature_gate.go:330] unrecognized feature gate: NewOLM
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165317 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165331 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165344 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165354 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165362 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165371 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165379 4688 feature_gate.go:330] unrecognized feature gate: Example
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165387 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165395 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165402 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165637 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165647 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165655 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165663 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165707 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165714 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165723 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165731 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165740 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165748 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165756 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165765 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165775 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165786 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165796 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165805 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165814 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165822 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165830 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165839 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165847 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.165855 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.165870 4688 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166126 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166147 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166158 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166170 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166178 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166187 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166195 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166203 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166214 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166227 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166235 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166245 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166254 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166263 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166271 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166279 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166287 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166296 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166305 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166314 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166323 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166331 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166338 4688 feature_gate.go:330] unrecognized feature gate: Example
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166346 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166355 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166363 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166371 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166378 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166386 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166397 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166405 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166413 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166421 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166428 4688 feature_gate.go:330] unrecognized feature gate: NewOLM
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166436 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166444 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166452 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166465 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166475 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166485 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166493 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166502 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166510 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166518 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166525 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166533 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166541 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166549 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166557 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166565 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166604 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166611 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166619 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166627 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166635 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166646 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166656 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166668 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166679 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166690 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166700 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166711 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166721 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166731 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166743 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166757 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166770 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166780 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166788 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166797 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.166807 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.166822 4688 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.169359 4688 server.go:940] "Client rotation is on, will bootstrap in background"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.176433 4688 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.176621 4688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.178755 4688 server.go:997] "Starting client certificate rotation" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.178812 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.179180 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 13:55:58.480028715 +0000 UTC Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.179324 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1004h40m15.300710225s for next certificate rotation Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.209555 4688 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.212642 4688 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.233096 4688 log.go:25] "Validated CRI v1 runtime API" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.279630 4688 log.go:25] "Validated CRI v1 image API" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.282349 4688 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.290453 4688 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-09-30-17-10-44-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.290502 4688 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.320790 4688 manager.go:217] Machine: {Timestamp:2025-09-30 17:15:43.316689722 +0000 UTC m=+0.612127310 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:88424a46-ac45-4c7d-88cf-12b6be491796 BootID:ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:44 
Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7c:41:22 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7c:41:22 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a6:f3:60 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c2:a1:88 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:68:61:3d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e1:28:56 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7a:d2:22:bb:2f:32 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6e:2b:e5:73:1f:cf Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.321053 4688 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.321292 4688 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.321789 4688 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.321964 4688 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.322000 4688 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.322224 4688 topology_manager.go:138] "Creating topology manager with none policy" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.322236 4688 container_manager_linux.go:303] "Creating device plugin manager" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.322739 4688 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.322771 4688 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Sep 30 
17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.324708 4688 state_mem.go:36] "Initialized new in-memory state store" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.325106 4688 server.go:1245] "Using root directory" path="/var/lib/kubelet" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.329045 4688 kubelet.go:418] "Attempting to sync node with API server" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.329070 4688 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.329094 4688 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.329108 4688 kubelet.go:324] "Adding apiserver pod source" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.329121 4688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.333316 4688 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.334638 4688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.336100 4688 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338443 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338667 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338720 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338745 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338794 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338811 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338825 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338851 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338870 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338891 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338916 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.338932 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.341452 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 
17:15:43.341581 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.341452 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.341880 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.342700 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.343822 4688 server.go:1280] "Started kubelet" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.343943 4688 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.344192 4688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.345040 4688 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.346301 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:43 crc systemd[1]: Started Kubernetes Kubelet. 
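The NodeConfig dump above carries this node's resource bookkeeping: SystemReserved cpu=200m/memory=350Mi, no KubeReserved, and a memory.available<100Mi hard-eviction signal. Under the standard node-allocatable formula (capacity minus reservations minus the hard-eviction threshold), that should leave roughly 30.9 GiB of the 33654124544-byte MemoryCapacity for pods. A back-of-the-envelope check, using only values taken from the log:

    package main

    import "fmt"

    func main() {
        const mi = int64(1024 * 1024)

        capacity := int64(33654124544) // MemoryCapacity from the Machine line above
        systemReserved := 350 * mi     // SystemReserved memory
        evictionHard := 100 * mi       // memory.available hard threshold

        allocatable := capacity - systemReserved - evictionHard
        fmt.Printf("allocatable: %d bytes (%.2f GiB)\n",
            allocatable, float64(allocatable)/float64(1024*1024*1024))
    }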
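The reflector "connection refused" errors above are expected on a single-node cluster: the kubelet starts before the static-pod kube-apiserver it is about to launch, so every informer list against api-int.crc.testing:6443 fails until that pod is serving, and client-go simply keeps retrying with backoff. A standalone sketch of the same wait-until-reachable pattern; this is not the kubelet's own code, only an illustration using the endpoint from the log:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        const endpoint = "api-int.crc.testing:6443"

        err := wait.PollUntilContextTimeout(context.Background(),
            2*time.Second, 5*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                conn, err := net.DialTimeout("tcp", endpoint, time.Second)
                if err != nil {
                    fmt.Println("still down:", err) // same dial error the reflectors log
                    return false, nil               // keep polling
                }
                conn.Close()
                return true, nil
            })
        fmt.Println("apiserver reachable:", err == nil)
    }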
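Almost everything below this point is volume reconstruction: one "Volume is marked as uncertain and added into the actual state" entry per volume. A restarted kubelet has no in-memory record of what it mounted before, so it walks /var/lib/kubelet/pods and re-adds every volume directory it finds as "uncertain" until the desired state from the API server confirms or cleans it up. A rough sketch of that directory walk under the standard pod-directory layout; the real logic lives in the volume manager's reconstruct.go, and this only mimics the enumeration:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        root := "/var/lib/kubelet/pods" // same root the kubelet scans

        pods, err := os.ReadDir(root)
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, pod := range pods {
            // Layout: <root>/<podUID>/volumes/<plugin, e.g. kubernetes.io~projected>/<volumeName>
            vols, _ := filepath.Glob(filepath.Join(root, pod.Name(), "volumes", "*", "*"))
            for _, v := range vols {
                fmt.Printf("uncertain volume: pod=%s path=%s\n", pod.Name(), v)
            }
        }
    }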
Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.349126 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.349190 4688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.349244 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:42:30.611221457 +0000 UTC Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.349340 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 897h26m47.261883985s for next certificate rotation Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.349654 4688 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.349730 4688 volume_manager.go:287] "The desired_state_of_world populator starts" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.349749 4688 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.349892 4688 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.350793 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.350936 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.353236 4688 server.go:460] "Adding debug handlers to kubelet server" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.354351 4688 factory.go:153] Registering CRI-O factory Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.355147 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.192:6443: connect: connection refused" interval="200ms" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.356034 4688 factory.go:221] Registration of the crio container factory successfully Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.356169 4688 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.356673 4688 factory.go:55] Registering systemd factory Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.356688 4688 factory.go:221] Registration of the systemd container factory successfully Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.356728 4688 factory.go:103] Registering Raw factory Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.356753 4688 manager.go:1196] Started watching for new ooms in manager Sep 30 17:15:43 crc 
kubenswrapper[4688]: I0930 17:15:43.358345 4688 manager.go:319] Starting recovery of all containers Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.354438 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.192:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186a1ede0432e94b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-09-30 17:15:43.343618379 +0000 UTC m=+0.639055967,LastTimestamp:2025-09-30 17:15:43.343618379 +0000 UTC m=+0.639055967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371381 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371477 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371505 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371531 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371555 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371639 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371667 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371688 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Sep 30 17:15:43 crc 
kubenswrapper[4688]: I0930 17:15:43.371719 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371745 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371773 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371799 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371852 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371884 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371912 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371949 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371972 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.371998 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372024 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Sep 30 17:15:43 crc 
kubenswrapper[4688]: I0930 17:15:43.372050 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372193 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372220 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372246 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372275 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372306 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372370 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372435 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372465 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372492 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372520 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 
17:15:43.372550 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372605 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372629 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372647 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372666 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372684 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372714 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372733 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372753 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372771 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372792 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372812 4688 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372831 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372853 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372872 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372893 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372912 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372934 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372961 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.372982 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373000 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373018 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373050 4688 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373075 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373097 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373164 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373187 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373208 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373228 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373248 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373268 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373286 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373304 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373323 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373343 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373363 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373380 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373400 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373419 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373438 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373455 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373473 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373490 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373507 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373561 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373602 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373620 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373645 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373665 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373685 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373703 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373719 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373740 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373757 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373775 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373795 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373815 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373834 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373853 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373871 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373890 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373908 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373929 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373949 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373967 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.373986 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374008 4688 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374027 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374047 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374067 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374086 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374104 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374125 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374146 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374174 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374195 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374218 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374241 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374264 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374286 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374307 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374393 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374413 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374433 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374455 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374473 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374493 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374512 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374531 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374549 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374592 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374613 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374633 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374651 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374682 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374702 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374722 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374743 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374763 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374782 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374803 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374822 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374840 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374858 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374876 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374894 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374912 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374932 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374951 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374968 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.374987 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375007 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375023 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375050 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375076 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375099 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375133 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375154 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375187 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375222 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375246 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375273 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375296 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375320 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375345 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375368 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375394 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375425 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375454 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375483 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375509 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375533 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375557 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" 
seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375687 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375727 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375747 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375823 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375857 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375876 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375898 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375917 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375951 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.375985 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.376004 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Sep 30 17:15:43 crc 
kubenswrapper[4688]: I0930 17:15:43.376024 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.376046 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.376065 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.376082 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.376101 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.376128 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.376180 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.376209 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.376259 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380278 4688 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380367 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380400 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380425 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380455 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380476 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380496 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380514 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380537 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380558 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380585 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380600 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380622 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380637 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380657 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380673 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380692 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380707 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380723 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380739 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380760 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380775 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380788 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.380804 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.381278 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.381290 4688 reconstruct.go:97] "Volume reconstruction finished" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.381297 4688 reconciler.go:26] "Reconciler: start to sync state" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.386610 4688 manager.go:324] Recovery completed Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.395587 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.399109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.399144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.399160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.402824 4688 cpu_manager.go:225] "Starting CPU manager" policy="none" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.402858 4688 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.402881 4688 state_mem.go:36] "Initialized new in-memory state store" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.425529 4688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.427901 4688 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.427965 4688 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.428011 4688 kubelet.go:2335] "Starting kubelet main sync loop" Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.428091 4688 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 30 17:15:43 crc kubenswrapper[4688]: W0930 17:15:43.429457 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.429516 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.450332 4688 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.460230 4688 policy_none.go:49] "None policy: Start" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.463854 4688 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.463932 4688 state_mem.go:35] "Initializing new in-memory state store" Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.528378 4688 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.550657 4688 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.553873 4688 manager.go:334] "Starting Device Plugin manager" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.554476 4688 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.554503 4688 server.go:79] "Starting device plugin registration server" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.554961 4688 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.554989 4688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.555717 4688 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.555799 4688 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.555808 4688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.555839 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial 
tcp 38.129.56.192:6443: connect: connection refused" interval="400ms" Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.564458 4688 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.655765 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.658147 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.658209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.658227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.658264 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.658940 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.192:6443: connect: connection refused" node="crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.728911 4688 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.729037 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.730663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.730706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.730720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.730902 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.731824 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.731915 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.732957 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.733037 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.733043 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.733080 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.733100 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.733156 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.733347 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.733644 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.733737 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.734185 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.734209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.734218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.734349 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.734925 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.734966 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.735206 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.735229 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.735264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.735354 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.735458 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.735488 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736214 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736249 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736411 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736423 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736424 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736555 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.736420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.737391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.737414 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.737970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.738067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.738124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786239 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786301 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786335 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786384 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786419 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786450 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786477 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786507 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786536 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786594 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786769 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786812 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786875 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.786978 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.787055 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.859497 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.861564 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.861619 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.861632 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.861662 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.862097 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.192:6443: connect: connection refused" node="crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888392 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888448 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888475 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888507 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888539 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888561 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 
crc kubenswrapper[4688]: I0930 17:15:43.888606 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888625 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888654 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888680 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888718 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888735 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888755 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.888778 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889172 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889257 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889281 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889328 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889361 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889382 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889437 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889489 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889522 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889551 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889624 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889657 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889685 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889714 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: I0930 17:15:43.889744 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:15:43 crc kubenswrapper[4688]: E0930 17:15:43.957412 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.192:6443: connect: connection refused" interval="800ms" Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.065890 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.083437 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.096211 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.120756 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:44 crc kubenswrapper[4688]: W0930 17:15:44.122402 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b175ff7f60ae69fe42891796b330f4e74d53b77d9dc33799a88a4e58351961f0 WatchSource:0}: Error finding container b175ff7f60ae69fe42891796b330f4e74d53b77d9dc33799a88a4e58351961f0: Status 404 returned error can't find the container with id b175ff7f60ae69fe42891796b330f4e74d53b77d9dc33799a88a4e58351961f0 Sep 30 17:15:44 crc kubenswrapper[4688]: W0930 17:15:44.122747 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e17cf90d610343d7a98235230ed8de3953f338b889f52293ac0f6032b553debb WatchSource:0}: Error finding container e17cf90d610343d7a98235230ed8de3953f338b889f52293ac0f6032b553debb: Status 404 returned error can't find the container with id e17cf90d610343d7a98235230ed8de3953f338b889f52293ac0f6032b553debb Sep 30 17:15:44 crc kubenswrapper[4688]: W0930 17:15:44.128391 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-21823a8779facf68c609ad1be02eac2d1e133ff2aae4041d8437e630a29b757c WatchSource:0}: Error finding container 21823a8779facf68c609ad1be02eac2d1e133ff2aae4041d8437e630a29b757c: Status 404 returned error can't find the container with id 21823a8779facf68c609ad1be02eac2d1e133ff2aae4041d8437e630a29b757c Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.130847 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:15:44 crc kubenswrapper[4688]: W0930 17:15:44.136346 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a862988aedbff4026bd9b3cfc4bc02628154c66807d0c09567735d3ac5aaa939 WatchSource:0}: Error finding container a862988aedbff4026bd9b3cfc4bc02628154c66807d0c09567735d3ac5aaa939: Status 404 returned error can't find the container with id a862988aedbff4026bd9b3cfc4bc02628154c66807d0c09567735d3ac5aaa939 Sep 30 17:15:44 crc kubenswrapper[4688]: W0930 17:15:44.158079 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-dcd8a6db79f889574be937decb3296f7c0468b47c15687ef96c6341e839595d2 WatchSource:0}: Error finding container dcd8a6db79f889574be937decb3296f7c0468b47c15687ef96c6341e839595d2: Status 404 returned error can't find the container with id dcd8a6db79f889574be937decb3296f7c0468b47c15687ef96c6341e839595d2 Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.262495 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.264230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.264287 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.264302 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.264344 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:15:44 crc kubenswrapper[4688]: E0930 17:15:44.264891 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.192:6443: connect: connection refused" node="crc" Sep 30 17:15:44 crc kubenswrapper[4688]: W0930 17:15:44.302109 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:44 crc kubenswrapper[4688]: E0930 17:15:44.302211 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:44 crc kubenswrapper[4688]: W0930 17:15:44.327958 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:44 crc kubenswrapper[4688]: E0930 17:15:44.328032 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.347351 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.433466 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"21823a8779facf68c609ad1be02eac2d1e133ff2aae4041d8437e630a29b757c"} Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.434908 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e17cf90d610343d7a98235230ed8de3953f338b889f52293ac0f6032b553debb"} Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.435779 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b175ff7f60ae69fe42891796b330f4e74d53b77d9dc33799a88a4e58351961f0"} Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.437082 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dcd8a6db79f889574be937decb3296f7c0468b47c15687ef96c6341e839595d2"} Sep 30 17:15:44 crc kubenswrapper[4688]: I0930 17:15:44.438035 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a862988aedbff4026bd9b3cfc4bc02628154c66807d0c09567735d3ac5aaa939"} Sep 30 17:15:44 crc kubenswrapper[4688]: W0930 17:15:44.453289 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:44 crc kubenswrapper[4688]: E0930 17:15:44.453374 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:44 crc kubenswrapper[4688]: W0930 17:15:44.665916 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:44 crc kubenswrapper[4688]: E0930 17:15:44.666341 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 
17:15:44 crc kubenswrapper[4688]: E0930 17:15:44.758519 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.192:6443: connect: connection refused" interval="1.6s" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.065416 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.067896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.067946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.067958 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.067993 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:15:45 crc kubenswrapper[4688]: E0930 17:15:45.068759 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.192:6443: connect: connection refused" node="crc" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.347732 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.444820 4688 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="234d46868e3718ca21398d00dc15a6ff942c4470db18dc85eef305bd1d7081f5" exitCode=0 Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.444944 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"234d46868e3718ca21398d00dc15a6ff942c4470db18dc85eef305bd1d7081f5"} Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.445113 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.446453 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.446509 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.446527 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.448006 4688 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f4165f08b0c37f20fa78a8c680933ec886fe2e509fefc3a0e5dc235e86da1782" exitCode=0 Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.448125 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f4165f08b0c37f20fa78a8c680933ec886fe2e509fefc3a0e5dc235e86da1782"} Sep 30 17:15:45 crc 
kubenswrapper[4688]: I0930 17:15:45.448187 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.449991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.450054 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.450074 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.451482 4688 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c" exitCode=0 Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.451608 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c"} Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.451658 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.452902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.452933 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.452948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.456874 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506"} Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.456937 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91"} Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.456953 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e"} Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.459220 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33" exitCode=0 Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.459274 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33"} Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.459383 4688 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.460987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.461067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.461086 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.468054 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.469115 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.469171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:45 crc kubenswrapper[4688]: I0930 17:15:45.469189 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.347272 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:46 crc kubenswrapper[4688]: E0930 17:15:46.359301 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.192:6443: connect: connection refused" interval="3.2s" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.469028 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.469096 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.469119 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.469258 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.473095 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.473146 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.473157 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.477738 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.477847 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.479234 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.479268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.479279 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.484290 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.484322 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.484335 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.484384 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.486377 4688 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cf303a1404acb01072341f0d5da725852ee2b91ad897523d82519cc18adbc4a6" exitCode=0 Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.486592 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.486886 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cf303a1404acb01072341f0d5da725852ee2b91ad897523d82519cc18adbc4a6"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.488132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.488162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.488177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 
17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.495557 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.495925 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"71050bd4365f2cc10ad82bf182ecf86dc17712a6c355674ff4fa09894810a022"} Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.496050 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.497169 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.497209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.497220 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.669715 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.671104 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.671230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.671253 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:46 crc kubenswrapper[4688]: I0930 17:15:46.671306 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:15:46 crc kubenswrapper[4688]: E0930 17:15:46.671939 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.192:6443: connect: connection refused" node="crc" Sep 30 17:15:46 crc kubenswrapper[4688]: E0930 17:15:46.929208 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.192:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186a1ede0432e94b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-09-30 17:15:43.343618379 +0000 UTC m=+0.639055967,LastTimestamp:2025-09-30 17:15:43.343618379 +0000 UTC m=+0.639055967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Sep 30 17:15:46 crc kubenswrapper[4688]: W0930 17:15:46.983209 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:46 crc kubenswrapper[4688]: E0930 17:15:46.983831 4688 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:47 crc kubenswrapper[4688]: W0930 17:15:47.063339 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:47 crc kubenswrapper[4688]: E0930 17:15:47.063879 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:47 crc kubenswrapper[4688]: W0930 17:15:47.068855 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:47 crc kubenswrapper[4688]: E0930 17:15:47.069109 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.201559 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:47 crc kubenswrapper[4688]: W0930 17:15:47.208938 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.192:6443: connect: connection refused Sep 30 17:15:47 crc kubenswrapper[4688]: E0930 17:15:47.209151 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.192:6443: connect: connection refused" logger="UnhandledError" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.210453 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.502853 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"03cf4c6a7c8aa72d5818e2add43a9aa9e4bd2475a5102f695e1dd06dd5e2becd"} Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.503893 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.505240 4688 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.505327 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.505384 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.512329 4688 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ebb581c632d1acfbbf9798b41320308b1dfa5335ded336febd15a41b4abfd303" exitCode=0 Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.512476 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.512514 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.512510 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ebb581c632d1acfbbf9798b41320308b1dfa5335ded336febd15a41b4abfd303"} Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.512596 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.512618 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.512520 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514877 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514937 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.515010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.514963 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:47 crc kubenswrapper[4688]: I0930 17:15:47.515107 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.013931 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.520655 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.521165 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.520647 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"090cf91bf69dedaa3104e1a443b39aef6e23b4e56b37c1c77e2bb11f6356e046"} Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.520841 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.521341 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aed6264eedcb6189465760eaf60977203d6834e36bef8d61ebc1b23104e8b133"} Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.521388 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c49443d15de2f440bf7ddd2a4ef166ce9d413ace84d29936e304898c854f771"} Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.521408 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b505f496459e7f54615e6bc25fda1070ea7d9e1c797327b30351b6b6eea8bd45"} Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.522454 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.522509 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.522730 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.522743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.522772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:48 crc kubenswrapper[4688]: I0930 17:15:48.522780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.482886 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.501463 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 
17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.501777 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.503609 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.503677 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.503697 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.529963 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"54547938220f9f777ec52faf1fdaeeb94cdcc39c6bffedb9e32867a46a34be8f"} Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.530048 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.530821 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.530218 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.530342 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.532464 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.532530 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.532549 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.532667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.532704 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.532721 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.532717 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.532854 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.532872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.873040 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.875019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 
17:15:49.875103 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.875123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:49 crc kubenswrapper[4688]: I0930 17:15:49.875169 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:15:50 crc kubenswrapper[4688]: I0930 17:15:50.533561 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:50 crc kubenswrapper[4688]: I0930 17:15:50.535336 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:50 crc kubenswrapper[4688]: I0930 17:15:50.535510 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:50 crc kubenswrapper[4688]: I0930 17:15:50.535541 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:51 crc kubenswrapper[4688]: I0930 17:15:51.334912 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Sep 30 17:15:51 crc kubenswrapper[4688]: I0930 17:15:51.537266 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:51 crc kubenswrapper[4688]: I0930 17:15:51.539155 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:51 crc kubenswrapper[4688]: I0930 17:15:51.539267 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:51 crc kubenswrapper[4688]: I0930 17:15:51.539292 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.396075 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.396313 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.396363 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.398053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.398122 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.398148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.764476 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.764796 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.766537 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.766672 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.766704 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.959263 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.959593 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.961248 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.961322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:52 crc kubenswrapper[4688]: I0930 17:15:52.961348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:53 crc kubenswrapper[4688]: E0930 17:15:53.564696 4688 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Sep 30 17:15:53 crc kubenswrapper[4688]: I0930 17:15:53.697715 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:53 crc kubenswrapper[4688]: I0930 17:15:53.698061 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:53 crc kubenswrapper[4688]: I0930 17:15:53.700045 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:53 crc kubenswrapper[4688]: I0930 17:15:53.700103 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:53 crc kubenswrapper[4688]: I0930 17:15:53.700123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:55 crc kubenswrapper[4688]: I0930 17:15:55.958757 4688 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Sep 30 17:15:55 crc kubenswrapper[4688]: I0930 17:15:55.958943 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.348202 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.557146 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.559705 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="03cf4c6a7c8aa72d5818e2add43a9aa9e4bd2475a5102f695e1dd06dd5e2becd" exitCode=255 Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.559796 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"03cf4c6a7c8aa72d5818e2add43a9aa9e4bd2475a5102f695e1dd06dd5e2becd"} Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.560043 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.561291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.561329 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.561342 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.565920 4688 scope.go:117] "RemoveContainer" containerID="03cf4c6a7c8aa72d5818e2add43a9aa9e4bd2475a5102f695e1dd06dd5e2becd" Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.963182 4688 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.963280 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.968538 4688 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Sep 30 17:15:57 crc kubenswrapper[4688]: I0930 17:15:57.968660 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.023980 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.024132 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.025383 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.025412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.025423 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.564919 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.567058 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394"} Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.567287 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.568163 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.568210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:58 crc kubenswrapper[4688]: I0930 17:15:58.568219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:59 crc kubenswrapper[4688]: I0930 17:15:59.490346 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:59 crc kubenswrapper[4688]: I0930 17:15:59.570151 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:15:59 crc kubenswrapper[4688]: I0930 17:15:59.570229 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:15:59 crc kubenswrapper[4688]: I0930 17:15:59.571867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:15:59 crc kubenswrapper[4688]: I0930 17:15:59.571929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:15:59 crc kubenswrapper[4688]: I0930 17:15:59.571947 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:15:59 crc kubenswrapper[4688]: I0930 17:15:59.577316 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:16:00 crc kubenswrapper[4688]: I0930 17:16:00.573110 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:16:00 crc kubenswrapper[4688]: I0930 17:16:00.574510 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:00 crc kubenswrapper[4688]: I0930 17:16:00.574598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:00 crc kubenswrapper[4688]: I0930 17:16:00.574621 4688 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.368952 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.369253 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.370762 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.370800 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.370814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.386777 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.575624 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.575624 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.576524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.576555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.576587 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.577616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.577659 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:01 crc kubenswrapper[4688]: I0930 17:16:01.577670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:02 crc kubenswrapper[4688]: E0930 17:16:02.963520 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Sep 30 17:16:02 crc kubenswrapper[4688]: I0930 17:16:02.967545 4688 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Sep 30 17:16:02 crc kubenswrapper[4688]: I0930 17:16:02.968258 4688 trace.go:236] Trace[1820205402]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 17:15:52.893) (total time: 10074ms): Sep 30 17:16:02 crc kubenswrapper[4688]: Trace[1820205402]: ---"Objects listed" error: 10074ms (17:16:02.968) Sep 30 17:16:02 crc kubenswrapper[4688]: Trace[1820205402]: [10.074963226s] [10.074963226s] END Sep 30 17:16:02 crc kubenswrapper[4688]: I0930 17:16:02.968276 4688 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Sep 30 17:16:02 crc kubenswrapper[4688]: I0930 
17:16:02.969344 4688 trace.go:236] Trace[1459332460]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 17:15:51.314) (total time: 11654ms): Sep 30 17:16:02 crc kubenswrapper[4688]: Trace[1459332460]: ---"Objects listed" error: 11654ms (17:16:02.969) Sep 30 17:16:02 crc kubenswrapper[4688]: Trace[1459332460]: [11.654993012s] [11.654993012s] END Sep 30 17:16:02 crc kubenswrapper[4688]: I0930 17:16:02.969390 4688 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Sep 30 17:16:02 crc kubenswrapper[4688]: I0930 17:16:02.969963 4688 trace.go:236] Trace[1484928135]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 17:15:50.895) (total time: 12074ms): Sep 30 17:16:02 crc kubenswrapper[4688]: Trace[1484928135]: ---"Objects listed" error: 12074ms (17:16:02.969) Sep 30 17:16:02 crc kubenswrapper[4688]: Trace[1484928135]: [12.074358883s] [12.074358883s] END Sep 30 17:16:02 crc kubenswrapper[4688]: I0930 17:16:02.970011 4688 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Sep 30 17:16:02 crc kubenswrapper[4688]: I0930 17:16:02.970104 4688 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Sep 30 17:16:02 crc kubenswrapper[4688]: E0930 17:16:02.971282 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.027151 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.031545 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.340515 4688 apiserver.go:52] "Watching apiserver" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.343497 4688 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.344154 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-82htv","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.344530 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.344653 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.344710 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.344729 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.344861 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.345096 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.345142 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.345489 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-82htv" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.345771 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.345847 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.347843 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.348229 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.348453 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.349360 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.349466 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.349612 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.349670 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.349796 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.350092 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.350533 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.351230 4688 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.351435 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.352499 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.365409 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.365409 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.373257 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.373346 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.373401 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.373434 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.373810 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.374281 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.375397 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.375437 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.375491 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.375523 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.376639 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377169 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377200 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.375702 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.376559 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377102 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377467 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377221 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377629 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377651 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377672 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377708 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377729 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377750 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377773 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377797 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377823 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377847 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377871 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377893 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.377915 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.378434 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.378451 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.378503 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.378716 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.378827 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.378847 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.378968 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379094 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379111 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379279 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379705 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379734 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379757 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379780 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379802 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379824 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379847 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379869 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379891 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379873 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379934 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379958 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.379981 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.380297 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.380454 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.380467 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.380675 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.380898 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.381136 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.381499 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.382107 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383147 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383709 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383756 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383782 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383805 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383828 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383850 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383878 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383928 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383961 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.383992 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384012 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384037 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384059 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384080 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384105 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384128 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384126 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384152 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384183 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384215 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384243 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384274 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384305 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384332 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384353 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384375 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384397 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384426 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384460 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384475 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384491 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384523 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384552 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384597 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384621 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384653 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Sep 30 
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384681 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384712 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384740 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384758 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384766 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384793 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384814 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384835 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384856 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384865 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384878 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384903 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384938 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384962 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.384986 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385033 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385055 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385077 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385099 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385124 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385143 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385166 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385189 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385210 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385233 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385253 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385275 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385298 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385321 4688 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385345 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385369 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385412 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385436 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385459 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385482 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385543 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385585 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385612 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:16:03 crc kubenswrapper[4688]: 
I0930 17:16:03.385637 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385649 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385727 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385748 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385768 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385788 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385812 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385971 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.385992 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386008 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386026 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386043 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386059 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386077 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386097 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386115 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386133 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386150 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386169 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386189 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386209 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386232 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386252 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386268 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386286 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386327 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386345 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386367 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386383 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386402 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386418 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386437 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386456 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386478 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386496 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386514 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386530 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386545 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386580 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386598 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386613 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386671 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386689 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386706 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386723 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386740 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386758 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386775 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386795 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386817 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386833 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386849 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386868 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386965 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386984 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387001 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387023 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387040 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387058 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387078 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387094 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387112 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387131 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387147 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387182 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387204 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387222 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 17:16:03 crc kubenswrapper[4688]: 
I0930 17:16:03.387238 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387259 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387281 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387302 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387321 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387339 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387358 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387374 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387391 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387406 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 
17:16:03.387423 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387439 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387455 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387472 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387491 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387507 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387525 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387543 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387561 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387611 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 
17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387628 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387656 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387696 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387726 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387745 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387765 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387782 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387800 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387822 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387843 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e3f10831-407a-4e6c-adb7-014837e056bd-hosts-file\") pod \"node-resolver-82htv\" (UID: \"e3f10831-407a-4e6c-adb7-014837e056bd\") " pod="openshift-dns/node-resolver-82htv" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387862 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387881 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387899 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj7t5\" (UniqueName: \"kubernetes.io/projected/e3f10831-407a-4e6c-adb7-014837e056bd-kube-api-access-nj7t5\") pod \"node-resolver-82htv\" (UID: \"e3f10831-407a-4e6c-adb7-014837e056bd\") " pod="openshift-dns/node-resolver-82htv" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387917 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387934 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387951 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387968 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.387985 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388025 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388037 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388047 4688 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388057 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388068 4688 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388079 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388089 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388098 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388107 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388117 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388126 4688 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388136 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 
17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388144 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388154 4688 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388164 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388173 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388182 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388192 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388201 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388212 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388242 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388254 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388265 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388278 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388288 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" 
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388297 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388307 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388319 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388328 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388339 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388350 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388359 4688 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388369 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388378 4688 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388387 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388738 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386229 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.394164 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.394116 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.386429 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388159 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388564 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388727 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.388740 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.389168 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.389253 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.389393 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.389419 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.389628 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.389697 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.389988 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.390060 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.390208 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.390475 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.390492 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.390499 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.390654 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.390658 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.390727 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.390678 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.391727 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:16:03.891687621 +0000 UTC m=+21.187125189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.391977 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.391972 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.391989 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.392025 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.392199 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.392275 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.392434 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.392445 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.392653 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.392784 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.392966 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.393197 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.393220 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.393293 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.393500 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.395387 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.395411 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.395344 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.395606 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.395694 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.395904 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.395912 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.395943 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396062 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396093 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396132 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396201 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396221 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396290 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396303 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.393862 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.393884 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.393948 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.394105 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.394134 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.394245 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.394390 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396439 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396451 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396719 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.396952 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.397218 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.397316 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.397614 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.397856 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.397894 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.397905 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.397905 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.398234 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.398299 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.398388 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.398409 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.398485 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.398608 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.399349 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.399411 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.399486 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.399555 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.399797 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.400039 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.400074 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.400141 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.400163 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.400189 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.400262 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.400277 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:03.900240883 +0000 UTC m=+21.195678441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.400832 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.401057 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.401304 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.393864 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.401644 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.402355 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.402486 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.402505 4688 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.402559 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.402704 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.402877 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.402899 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.402986 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.403014 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.402976 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.403103 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.403134 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.403160 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.403274 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.403473 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.403546 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.404038 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.404124 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.404512 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.404561 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.404645 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.404930 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.405201 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.405330 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:03.905306305 +0000 UTC m=+21.200743863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.405631 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.405737 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.405832 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.405877 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.405668 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.405927 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.406436 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.407047 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.407377 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.407682 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.408117 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.408180 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.408209 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.408719 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.409086 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.409190 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.409368 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.408869 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.410365 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.410511 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.410828 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.411520 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.411764 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.420621 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.421055 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.421082 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.421100 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.421177 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:03.921148876 +0000 UTC m=+21.216586444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.421615 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.423004 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.423023 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.423035 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.423078 4688 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:03.923065816 +0000 UTC m=+21.218503384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.428329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.430270 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.431758 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.443313 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.443484 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.443686 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.443793 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.444678 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.445013 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.447955 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.448374 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.449004 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.449334 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.449709 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.452930 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.453538 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.453852 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.454064 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.457480 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.458311 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.469941 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.472398 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.475052 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.475145 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.476026 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.476115 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.476187 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.476307 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.476378 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.478141 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.478240 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.478748 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.479085 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.491960 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492146 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492210 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e3f10831-407a-4e6c-adb7-014837e056bd-hosts-file\") pod \"node-resolver-82htv\" (UID: \"e3f10831-407a-4e6c-adb7-014837e056bd\") " pod="openshift-dns/node-resolver-82htv" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492246 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj7t5\" (UniqueName: \"kubernetes.io/projected/e3f10831-407a-4e6c-adb7-014837e056bd-kube-api-access-nj7t5\") pod \"node-resolver-82htv\" (UID: \"e3f10831-407a-4e6c-adb7-014837e056bd\") " pod="openshift-dns/node-resolver-82htv" Sep 30 17:16:03 crc 
kubenswrapper[4688]: I0930 17:16:03.492262 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492323 4688 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492334 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492343 4688 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492352 4688 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492362 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492371 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492380 4688 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492390 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492399 4688 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492406 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492415 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492424 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492434 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492444 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492453 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492462 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492471 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492482 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492490 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492499 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492509 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492518 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492527 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492535 4688 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492544 4688 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" 
DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492553 4688 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492562 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492585 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492593 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492602 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492611 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492620 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492628 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492637 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492647 4688 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492656 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492667 4688 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492667 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492675 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492705 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492719 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492732 4688 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492745 4688 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492758 4688 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492771 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492783 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492795 4688 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492808 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492821 4688 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492833 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492845 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492857 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492870 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492882 4688 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492895 4688 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492907 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492921 4688 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492934 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492973 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.492987 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493000 4688 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493012 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493024 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493036 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493048 4688 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493060 4688 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493072 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493084 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493096 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493108 4688 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493120 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493132 4688 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493143 4688 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493156 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493173 4688 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493188 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493204 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493218 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493233 4688 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493250 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493265 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493280 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493297 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493313 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493328 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493343 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493358 4688 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493372 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493386 4688 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493401 4688 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493414 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493429 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493444 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493459 4688 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493474 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493490 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493504 4688 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493515 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493526 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493538 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493550 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493564 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493597 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493610 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493622 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493635 4688 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493647 4688 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493658 4688 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493671 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493682 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493694 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493705 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493717 4688 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493728 4688 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493740 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493752 4688 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493764 4688 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493775 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493787 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493798 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493810 4688 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493821 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493833 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493845 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493858 4688 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493872 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493886 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493898 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493910 
4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493921 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493934 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493945 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493958 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493971 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493983 4688 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493995 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494006 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494020 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494035 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494049 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494065 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath 
\"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494079 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494093 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494108 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494122 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494135 4688 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494147 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494159 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494172 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494184 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494197 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494208 4688 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494219 4688 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494231 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc 
kubenswrapper[4688]: I0930 17:16:03.494243 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494257 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.494269 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493279 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.493897 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e3f10831-407a-4e6c-adb7-014837e056bd-hosts-file\") pod \"node-resolver-82htv\" (UID: \"e3f10831-407a-4e6c-adb7-014837e056bd\") " pod="openshift-dns/node-resolver-82htv" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.526150 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj7t5\" (UniqueName: \"kubernetes.io/projected/e3f10831-407a-4e6c-adb7-014837e056bd-kube-api-access-nj7t5\") pod \"node-resolver-82htv\" (UID: \"e3f10831-407a-4e6c-adb7-014837e056bd\") " pod="openshift-dns/node-resolver-82htv" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.528865 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.536121 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.544693 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.548314 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.553815 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.573873 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.591073 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.591208 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.592248 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.592936 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.593324 4688 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.627300 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.637634 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.641877 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.642509 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.643399 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.647197 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.647849 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.648374 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.649131 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.649512 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.651499 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771a
ee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.651760 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.652365 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.652846 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Sep 30 17:16:03 
crc kubenswrapper[4688]: I0930 17:16:03.653461 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.654032 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.654488 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.655166 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.658984 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-82htv" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.661493 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.667987 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.673765 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.679312 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.680937 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.688629 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.689066 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.695159 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.709219 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.710531 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.712370 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.713265 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.714225 4688 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.715031 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.715197 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.719407 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.720271 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.721967 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.725046 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.726055 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.727589 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.728854 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.729996 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.730224 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.730728 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.732286 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.732989 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.735330 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.735968 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.737486 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.738072 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.739895 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.741552 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.743467 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.743989 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.744558 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.747364 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.747958 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.749100 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9wbfw"] Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.749440 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jggn4"] Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.749712 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.749725 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-28cnb"] Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.749830 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.752742 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.758177 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.759742 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.759760 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.760081 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.760225 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.760477 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.760658 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.760496 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.760546 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.760931 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.763270 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.767070 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.773770 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.785996 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.795902 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.795946 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-var-lib-cni-multus\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.795983 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-proxy-tls\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796000 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03cb23bc-dcc1-46e7-b238-67a6163f456d-cni-binary-copy\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796015 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-var-lib-cni-bin\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796030 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-conf-dir\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796046 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkpg\" (UniqueName: 
\"kubernetes.io/projected/03cb23bc-dcc1-46e7-b238-67a6163f456d-kube-api-access-rqkpg\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-run-k8s-cni-cncf-io\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796089 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-cnibin\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796104 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-run-netns\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-etc-kubernetes\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796135 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/826a3e0f-0f37-4659-a649-b0ce04c8b890-cni-binary-copy\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796154 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-socket-dir-parent\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796169 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-run-multus-certs\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796192 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-system-cni-dir\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796247 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-var-lib-kubelet\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796297 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-hostroot\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796334 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-rootfs\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796362 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-system-cni-dir\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796384 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-os-release\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796432 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-daemon-config\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796491 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-822nf\" (UniqueName: \"kubernetes.io/projected/826a3e0f-0f37-4659-a649-b0ce04c8b890-kube-api-access-822nf\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796543 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-os-release\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796590 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-cni-dir\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796639 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-tuning-conf-dir\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796668 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/826a3e0f-0f37-4659-a649-b0ce04c8b890-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796693 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-mcd-auth-proxy-config\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796724 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6c9h\" (UniqueName: \"kubernetes.io/projected/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-kube-api-access-w6c9h\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796746 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-cnibin\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796805 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.796854 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:16:03 crc kubenswrapper[4688]: W0930 17:16:03.804746 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-781f3817c8747b87403684f5cacd3211cdc54457eabb8ed2b68298311febd0aa WatchSource:0}: Error finding container 781f3817c8747b87403684f5cacd3211cdc54457eabb8ed2b68298311febd0aa: Status 404 returned error can't find the container with id 781f3817c8747b87403684f5cacd3211cdc54457eabb8ed2b68298311febd0aa Sep 30 17:16:03 crc kubenswrapper[4688]: W0930 17:16:03.807697 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-2e0f63d7ebe35fbc21b6344b97478648e6ac088cd3419251986003c67ad5ae34 WatchSource:0}: Error finding container 2e0f63d7ebe35fbc21b6344b97478648e6ac088cd3419251986003c67ad5ae34: 
Status 404 returned error can't find the container with id 2e0f63d7ebe35fbc21b6344b97478648e6ac088cd3419251986003c67ad5ae34 Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.820470 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.831280 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.843943 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.861242 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.881224 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899249 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899375 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-822nf\" (UniqueName: \"kubernetes.io/projected/826a3e0f-0f37-4659-a649-b0ce04c8b890-kube-api-access-822nf\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899417 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-os-release\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899441 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-cni-dir\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899466 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6c9h\" (UniqueName: \"kubernetes.io/projected/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-kube-api-access-w6c9h\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899494 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-tuning-conf-dir\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899517 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/826a3e0f-0f37-4659-a649-b0ce04c8b890-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899543 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-mcd-auth-proxy-config\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899585 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-cnibin\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899610 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-var-lib-cni-multus\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899638 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-proxy-tls\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899659 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03cb23bc-dcc1-46e7-b238-67a6163f456d-cni-binary-copy\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899680 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-var-lib-cni-bin\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899675 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-cni-dir\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-var-lib-cni-multus\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900005 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-var-lib-cni-bin\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.899702 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-conf-dir\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900091 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqkpg\" (UniqueName: \"kubernetes.io/projected/03cb23bc-dcc1-46e7-b238-67a6163f456d-kube-api-access-rqkpg\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900095 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-cnibin\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: E0930 17:16:03.900116 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:16:04.900086065 +0000 UTC m=+22.195523713 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900159 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-run-k8s-cni-cncf-io\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900173 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-conf-dir\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900220 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-cnibin\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900241 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-run-k8s-cni-cncf-io\") pod \"multus-9wbfw\" (UID: 
\"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900266 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-run-netns\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900294 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-etc-kubernetes\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900316 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-run-netns\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900331 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/826a3e0f-0f37-4659-a649-b0ce04c8b890-cni-binary-copy\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900359 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-socket-dir-parent\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900294 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-cnibin\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900366 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-tuning-conf-dir\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900382 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-run-multus-certs\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900413 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-run-multus-certs\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 
17:16:03.900417 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-etc-kubernetes\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900435 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-system-cni-dir\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900457 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-var-lib-kubelet\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900481 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-hostroot\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900486 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-socket-dir-parent\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900506 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-rootfs\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900520 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-host-var-lib-kubelet\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900528 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-system-cni-dir\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900551 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-os-release\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900552 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-system-cni-dir\") 
pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900592 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-daemon-config\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900609 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-rootfs\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900622 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-hostroot\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900655 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-system-cni-dir\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900850 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/826a3e0f-0f37-4659-a649-b0ce04c8b890-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900932 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/03cb23bc-dcc1-46e7-b238-67a6163f456d-os-release\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.900965 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-mcd-auth-proxy-config\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.901190 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/826a3e0f-0f37-4659-a649-b0ce04c8b890-cni-binary-copy\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.901263 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/826a3e0f-0f37-4659-a649-b0ce04c8b890-os-release\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " 
pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.901479 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/03cb23bc-dcc1-46e7-b238-67a6163f456d-multus-daemon-config\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.901511 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/03cb23bc-dcc1-46e7-b238-67a6163f456d-cni-binary-copy\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.906447 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-proxy-tls\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.907187 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.924152 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-822nf\" (UniqueName: \"kubernetes.io/projected/826a3e0f-0f37-4659-a649-b0ce04c8b890-kube-api-access-822nf\") pod \"multus-additional-cni-plugins-28cnb\" (UID: \"826a3e0f-0f37-4659-a649-b0ce04c8b890\") " pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.924373 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6c9h\" (UniqueName: \"kubernetes.io/projected/f607cccb-8e9d-47ff-8c93-6f4a791fa36b-kube-api-access-w6c9h\") pod \"machine-config-daemon-jggn4\" (UID: \"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\") " pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.927435 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqkpg\" (UniqueName: \"kubernetes.io/projected/03cb23bc-dcc1-46e7-b238-67a6163f456d-kube-api-access-rqkpg\") pod \"multus-9wbfw\" (UID: \"03cb23bc-dcc1-46e7-b238-67a6163f456d\") " pod="openshift-multus/multus-9wbfw" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.931875 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.953651 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.977964 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:03 crc kubenswrapper[4688]: I0930 17:16:03.992340 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.002746 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.002805 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.002837 
4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.002863 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003042 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003073 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003088 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003157 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:05.003132261 +0000 UTC m=+22.298569819 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003495 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003617 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:05.003591853 +0000 UTC m=+22.299029411 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003666 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003708 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:05.003696746 +0000 UTC m=+22.299134424 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003718 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003735 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003749 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.003774 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:05.003767558 +0000 UTC m=+22.299205106 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.011899 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.020402 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rwctq"] Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.021216 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.025769 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.025903 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.025999 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.025997 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.026085 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.026227 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.026269 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.026285 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.052317 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.068644 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.079027 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.088023 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9wbfw" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.091097 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.095633 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-28cnb" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.099471 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: W0930 17:16:04.102915 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03cb23bc_dcc1_46e7_b238_67a6163f456d.slice/crio-69a8cd87368a389a12c22d3a17105bdde6f46340261a21262576cd303903d85b WatchSource:0}: Error finding container 69a8cd87368a389a12c22d3a17105bdde6f46340261a21262576cd303903d85b: Status 404 returned error can't find the container with id 69a8cd87368a389a12c22d3a17105bdde6f46340261a21262576cd303903d85b Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.104690 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-kubelet\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.104782 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-ovn\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.104813 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovn-node-metrics-cert\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.104847 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-systemd\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.104880 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-log-socket\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.104916 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-etc-openvswitch\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.104945 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5jsc\" (UniqueName: \"kubernetes.io/projected/cb5dfa5a-bc52-43b4-a301-730ccb234480-kube-api-access-n5jsc\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105137 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-config\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105175 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-bin\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105208 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-script-lib\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105236 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-systemd-units\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105275 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-env-overrides\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105309 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-slash\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105332 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-var-lib-openvswitch\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105358 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-node-log\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105380 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-netd\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105405 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-netns\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105431 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-openvswitch\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105470 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-ovn-kubernetes\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.105511 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.108185 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.121785 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: W0930 17:16:04.123981 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod826a3e0f_0f37_4659_a649_b0ce04c8b890.slice/crio-66474d1fafcf617722929f45fd885ffbc4881548ab04728403c2e53c3d067d10 WatchSource:0}: Error finding container 66474d1fafcf617722929f45fd885ffbc4881548ab04728403c2e53c3d067d10: Status 404 returned error can't find the container with id 66474d1fafcf617722929f45fd885ffbc4881548ab04728403c2e53c3d067d10 Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.132990 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.140013 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.196561 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209067 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-log-socket\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209125 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-etc-openvswitch\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209152 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-config\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209173 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5jsc\" (UniqueName: \"kubernetes.io/projected/cb5dfa5a-bc52-43b4-a301-730ccb234480-kube-api-access-n5jsc\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-bin\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209227 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-script-lib\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209262 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-systemd-units\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209284 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-env-overrides\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-slash\") pod \"ovnkube-node-rwctq\" (UID: 
\"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209325 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-var-lib-openvswitch\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209348 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-node-log\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209368 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-netd\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209390 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-netns\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209412 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-openvswitch\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209433 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-ovn-kubernetes\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209454 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209502 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-kubelet\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209524 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-ovn\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209546 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovn-node-metrics-cert\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209592 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-systemd\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209677 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-systemd\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209726 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-log-socket\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.209758 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-etc-openvswitch\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.210412 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-node-log\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.210635 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-config\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.210691 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-netd\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.210729 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-netns\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.210760 4688 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-openvswitch\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.210791 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-ovn-kubernetes\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.210827 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.210863 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-kubelet\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.210896 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-ovn\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.211420 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-bin\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.211589 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-slash\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.211707 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-var-lib-openvswitch\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.211877 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-env-overrides\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.211933 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-systemd-units\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.212655 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-script-lib\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.220292 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.221281 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovn-node-metrics-cert\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.233041 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5jsc\" 
(UniqueName: \"kubernetes.io/projected/cb5dfa5a-bc52-43b4-a301-730ccb234480-kube-api-access-n5jsc\") pod \"ovnkube-node-rwctq\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.243986 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.255596 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.264629 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.282993 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.293892 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.304074 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.323775 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.375358 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:04 crc kubenswrapper[4688]: W0930 17:16:04.387870 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb5dfa5a_bc52_43b4_a301_730ccb234480.slice/crio-12d95964b85acd8483f3bf0607468716c1a05281fc9b387ce2c6ed66e00fb8c2 WatchSource:0}: Error finding container 12d95964b85acd8483f3bf0607468716c1a05281fc9b387ce2c6ed66e00fb8c2: Status 404 returned error can't find the container with id 12d95964b85acd8483f3bf0607468716c1a05281fc9b387ce2c6ed66e00fb8c2 Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.428668 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.428870 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.589344 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.589948 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.591766 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394" exitCode=255 Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.591858 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.591924 4688 scope.go:117] "RemoveContainer" containerID="03cf4c6a7c8aa72d5818e2add43a9aa9e4bd2475a5102f695e1dd06dd5e2becd" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.593687 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9wbfw" event={"ID":"03cb23bc-dcc1-46e7-b238-67a6163f456d","Type":"ContainerStarted","Data":"6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.593885 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9wbfw" event={"ID":"03cb23bc-dcc1-46e7-b238-67a6163f456d","Type":"ContainerStarted","Data":"69a8cd87368a389a12c22d3a17105bdde6f46340261a21262576cd303903d85b"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.597968 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"781f3817c8747b87403684f5cacd3211cdc54457eabb8ed2b68298311febd0aa"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.601690 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.601742 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"39c1a9ae5ab3078f7f04c59bf1cca5eeafd70c2df80ecfaf18dd11255e874257"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.609229 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.609938 4688 scope.go:117] "RemoveContainer" containerID="42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394" Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.611051 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.614203 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.614255 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.614269 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"f5aa376ffbd201d80de8e386f25489d5ec620d1aab4f5aa0b0426387f6f4d9cf"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.617479 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.617885 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-82htv" event={"ID":"e3f10831-407a-4e6c-adb7-014837e056bd","Type":"ContainerStarted","Data":"f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.617919 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-82htv" event={"ID":"e3f10831-407a-4e6c-adb7-014837e056bd","Type":"ContainerStarted","Data":"e9e1fbd68c224e25e1c644bec14f436b3919afad522f42eb3a38bb2c9e1e3252"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.620970 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" event={"ID":"826a3e0f-0f37-4659-a649-b0ce04c8b890","Type":"ContainerStarted","Data":"eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.621040 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" event={"ID":"826a3e0f-0f37-4659-a649-b0ce04c8b890","Type":"ContainerStarted","Data":"66474d1fafcf617722929f45fd885ffbc4881548ab04728403c2e53c3d067d10"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.622686 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8" exitCode=0 Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.622764 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.622806 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"12d95964b85acd8483f3bf0607468716c1a05281fc9b387ce2c6ed66e00fb8c2"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.625169 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.625240 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.625259 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2e0f63d7ebe35fbc21b6344b97478648e6ac088cd3419251986003c67ad5ae34"} Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.629418 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.641269 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.652789 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.667461 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.680993 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.691749 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.698682 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.713887 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.727126 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.737871 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.748099 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.762599 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.791602 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.834238 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.872381 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.914216 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.916609 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:04 crc kubenswrapper[4688]: E0930 17:16:04.916855 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:16:06.91683217 +0000 UTC m=+24.212269728 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.953581 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03cf4c6a7c8aa72d5818e2add43a9aa9e4bd2475a5102f695e1dd06dd5e2becd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:15:57Z\\\",\\\"message\\\":\\\"W0930 17:15:46.741276 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0930 17:15:46.741501 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759252546 cert, and key in /tmp/serving-cert-3018751186/serving-signer.crt, /tmp/serving-cert-3018751186/serving-signer.key\\\\nI0930 17:15:47.093359 1 observer_polling.go:159] Starting file observer\\\\nW0930 17:15:47.096042 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0930 17:15:47.096276 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:15:47.098274 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3018751186/tls.crt::/tmp/serving-cert-3018751186/tls.key\\\\\\\"\\\\nF0930 17:15:57.406787 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key 
pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:04 crc kubenswrapper[4688]: I0930 17:16:04.991850 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:04Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.018155 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.018204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.018240 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.018265 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018368 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018385 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018439 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:07.018416408 +0000 UTC m=+24.313853966 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018488 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:07.01846319 +0000 UTC m=+24.313900748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018592 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018610 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018622 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018648 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:07.018641824 +0000 UTC m=+24.314079382 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018697 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018710 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018719 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.018738 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:07.018731417 +0000 UTC m=+24.314168975 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.031547 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.077425 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.116776 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.150755 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.191489 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.231924 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.429217 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.429272 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.429391 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.429540 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.434220 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.493318 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5m4bx"] Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.493990 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.496303 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.496582 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.496690 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.496815 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.517181 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\
\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.534224 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.596497 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.624286 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b179aec9-bd56-4dac-acf2-52f6aa82681b-serviceca\") pod \"node-ca-5m4bx\" (UID: \"b179aec9-bd56-4dac-acf2-52f6aa82681b\") " pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.624330 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnwn\" (UniqueName: \"kubernetes.io/projected/b179aec9-bd56-4dac-acf2-52f6aa82681b-kube-api-access-wtnwn\") pod \"node-ca-5m4bx\" (UID: \"b179aec9-bd56-4dac-acf2-52f6aa82681b\") " pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.624367 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b179aec9-bd56-4dac-acf2-52f6aa82681b-host\") pod 
\"node-ca-5m4bx\" (UID: \"b179aec9-bd56-4dac-acf2-52f6aa82681b\") " pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.636311 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.636359 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.636369 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.636378 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.636385 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.636395 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.638276 4688 generic.go:334] "Generic (PLEG): container finished" podID="826a3e0f-0f37-4659-a649-b0ce04c8b890" containerID="eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720" exitCode=0 Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.638328 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" event={"ID":"826a3e0f-0f37-4659-a649-b0ce04c8b890","Type":"ContainerDied","Data":"eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720"} Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.639674 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.642636 4688 scope.go:117] "RemoveContainer" containerID="42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394" Sep 30 17:16:05 crc kubenswrapper[4688]: E0930 17:16:05.642756 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 
17:16:05.647516 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.663334 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.686391 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.703411 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03cf4c6a7c8aa72d5818e2add43a9aa9e4bd2475a5102f695e1dd06dd5e2becd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:15:57Z\\\",\\\"message\\\":\\\"W0930 17:15:46.741276 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0930 
17:15:46.741501 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759252546 cert, and key in /tmp/serving-cert-3018751186/serving-signer.crt, /tmp/serving-cert-3018751186/serving-signer.key\\\\nI0930 17:15:47.093359 1 observer_polling.go:159] Starting file observer\\\\nW0930 17:15:47.096042 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0930 17:15:47.096276 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:15:47.098274 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3018751186/tls.crt::/tmp/serving-cert-3018751186/tls.key\\\\\\\"\\\\nF0930 17:15:57.406787 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 
17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.714189 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.725751 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnwn\" (UniqueName: \"kubernetes.io/projected/b179aec9-bd56-4dac-acf2-52f6aa82681b-kube-api-access-wtnwn\") pod \"node-ca-5m4bx\" (UID: \"b179aec9-bd56-4dac-acf2-52f6aa82681b\") " pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.725821 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b179aec9-bd56-4dac-acf2-52f6aa82681b-host\") pod \"node-ca-5m4bx\" (UID: \"b179aec9-bd56-4dac-acf2-52f6aa82681b\") " pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.725885 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b179aec9-bd56-4dac-acf2-52f6aa82681b-serviceca\") pod \"node-ca-5m4bx\" (UID: \"b179aec9-bd56-4dac-acf2-52f6aa82681b\") " pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.726817 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b179aec9-bd56-4dac-acf2-52f6aa82681b-host\") pod \"node-ca-5m4bx\" (UID: \"b179aec9-bd56-4dac-acf2-52f6aa82681b\") " pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.726908 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b179aec9-bd56-4dac-acf2-52f6aa82681b-serviceca\") pod \"node-ca-5m4bx\" (UID: \"b179aec9-bd56-4dac-acf2-52f6aa82681b\") " pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.731994 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.748428 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.750194 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnwn\" (UniqueName: \"kubernetes.io/projected/b179aec9-bd56-4dac-acf2-52f6aa82681b-kube-api-access-wtnwn\") pod \"node-ca-5m4bx\" (UID: \"b179aec9-bd56-4dac-acf2-52f6aa82681b\") " pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.777720 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z 
is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.811434 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.844389 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5m4bx" Sep 30 17:16:05 crc kubenswrapper[4688]: W0930 17:16:05.861951 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb179aec9_bd56_4dac_acf2_52f6aa82681b.slice/crio-2b7214b6747c18cb14a3c86b3ae272bd595c23bd6cb9721c885c9e367054836b WatchSource:0}: Error finding container 2b7214b6747c18cb14a3c86b3ae272bd595c23bd6cb9721c885c9e367054836b: Status 404 returned error can't find the container with id 2b7214b6747c18cb14a3c86b3ae272bd595c23bd6cb9721c885c9e367054836b Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.862723 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.893555 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.932861 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:05 crc kubenswrapper[4688]: I0930 17:16:05.979561 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:05Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.011727 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.051611 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.088652 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.130596 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.169774 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.217070 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.255511 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.293939 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.334243 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc 
kubenswrapper[4688]: I0930 17:16:06.372879 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.420361 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.429184 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:06 crc kubenswrapper[4688]: E0930 17:16:06.429367 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.459721 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.648418 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/node-ca-5m4bx" event={"ID":"b179aec9-bd56-4dac-acf2-52f6aa82681b","Type":"ContainerStarted","Data":"196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f"} Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.648524 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5m4bx" event={"ID":"b179aec9-bd56-4dac-acf2-52f6aa82681b","Type":"ContainerStarted","Data":"2b7214b6747c18cb14a3c86b3ae272bd595c23bd6cb9721c885c9e367054836b"} Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.651019 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153"} Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.654306 4688 generic.go:334] "Generic (PLEG): container finished" podID="826a3e0f-0f37-4659-a649-b0ce04c8b890" containerID="d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e" exitCode=0 Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.654357 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" event={"ID":"826a3e0f-0f37-4659-a649-b0ce04c8b890","Type":"ContainerDied","Data":"d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e"} Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.671765 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.696405 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.730517 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.753973 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.770146 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.780692 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.791094 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.809765 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.828193 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.859716 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.895318 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.936594 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:06 crc 
kubenswrapper[4688]: I0930 17:16:06.938965 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:06 crc kubenswrapper[4688]: E0930 17:16:06.939176 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:16:10.939138562 +0000 UTC m=+28.234576160 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:16:06 crc kubenswrapper[4688]: I0930 17:16:06.970327 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-
09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:06Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.013814 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.040277 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.040333 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.040374 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.040405 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.040535 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.040627 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:11.040606427 +0000 UTC m=+28.336043995 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041088 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041130 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041142 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041179 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:11.041168992 +0000 UTC m=+28.336606560 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041146 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041195 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041208 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041218 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041223 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:11.041215313 +0000 UTC m=+28.336652881 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.041314 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:11.041254534 +0000 UTC m=+28.336692102 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.055173 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.093872 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.133588 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.176421 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.216557 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-
30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.250182 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.294991 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.330934 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.372620 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.414778 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.428355 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.428386 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.428499 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:07 crc kubenswrapper[4688]: E0930 17:16:07.428909 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.456360 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.496477 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.531854 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.586238 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.663603 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.666150 4688 generic.go:334] "Generic (PLEG): container finished" podID="826a3e0f-0f37-4659-a649-b0ce04c8b890" containerID="aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555" exitCode=0 Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.666238 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" event={"ID":"826a3e0f-0f37-4659-a649-b0ce04c8b890","Type":"ContainerDied","Data":"aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555"} Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.683154 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.702411 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.724899 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.736270 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.772310 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.821038 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.855521 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.901412 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.933968 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:07 crc kubenswrapper[4688]: I0930 17:16:07.975448 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:07Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.015602 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.051232 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.093440 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.129707 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.429108 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:08 crc kubenswrapper[4688]: E0930 17:16:08.429337 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.674724 4688 generic.go:334] "Generic (PLEG): container finished" podID="826a3e0f-0f37-4659-a649-b0ce04c8b890" containerID="dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935" exitCode=0 Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.674788 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" event={"ID":"826a3e0f-0f37-4659-a649-b0ce04c8b890","Type":"ContainerDied","Data":"dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935"} Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.695116 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.717943 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.740787 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.758953 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.779028 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.796040 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.806897 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.818374 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.829840 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.843709 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.856319 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.874204 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.888183 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:08 crc kubenswrapper[4688]: I0930 17:16:08.908200 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:08Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.371410 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.375083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.375154 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.375180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.375387 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.384232 4688 kubelet_node_status.go:115] "Node was previously registered" node="crc" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.384694 4688 kubelet_node_status.go:79] "Successfully registered node" node="crc" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.386353 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.386618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.386636 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.386656 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.386673 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:09 crc kubenswrapper[4688]: E0930 17:16:09.409045 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.415416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.415462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.415475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.415494 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.415510 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.429104 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.429198 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:09 crc kubenswrapper[4688]: E0930 17:16:09.429258 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:09 crc kubenswrapper[4688]: E0930 17:16:09.429486 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:09 crc kubenswrapper[4688]: E0930 17:16:09.430791 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.435370 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.435431 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.435445 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.435462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.435477 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:09 crc kubenswrapper[4688]: E0930 17:16:09.450870 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.455800 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.455868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.455887 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.455922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.455952 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:09 crc kubenswrapper[4688]: E0930 17:16:09.477544 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.481552 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.481605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.481614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.481631 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.481642 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:09 crc kubenswrapper[4688]: E0930 17:16:09.505037 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: E0930 17:16:09.505288 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.507382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.507415 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.507423 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.507437 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.507446 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.611046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.611401 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.611413 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.611433 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.611446 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.691830 4688 generic.go:334] "Generic (PLEG): container finished" podID="826a3e0f-0f37-4659-a649-b0ce04c8b890" containerID="3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a" exitCode=0 Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.691882 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" event={"ID":"826a3e0f-0f37-4659-a649-b0ce04c8b890","Type":"ContainerDied","Data":"3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a"} Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.714286 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z 
is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.714514 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.714610 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.714635 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.714665 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.714687 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.732691 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.750540 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.764550 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.780961 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.793219 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.806332 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.819997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.820042 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.820054 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.820076 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.820089 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.822972 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.837871 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.856202 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.869151 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.884872 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.900282 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.918401 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:09Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.923421 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.923473 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.923488 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.923510 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:09 crc kubenswrapper[4688]: I0930 17:16:09.923522 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:09Z","lastTransitionTime":"2025-09-30T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.026952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.026991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.027001 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.027019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.027029 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.130786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.130850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.130860 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.130879 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.130891 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.233451 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.233514 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.233534 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.233559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.233598 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.337086 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.337131 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.337142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.337160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.337175 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.428589 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:16:10 crc kubenswrapper[4688]: E0930 17:16:10.428832 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.440671 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.440738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.440756 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.440783 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.440801 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.543604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.543644 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.543659 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.543675 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.543685 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.647755 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.648146 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.648160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.648180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.648194 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.701350 4688 generic.go:334] "Generic (PLEG): container finished" podID="826a3e0f-0f37-4659-a649-b0ce04c8b890" containerID="cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779" exitCode=0
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.701430 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" event={"ID":"826a3e0f-0f37-4659-a649-b0ce04c8b890","Type":"ContainerDied","Data":"cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779"}
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.712809 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23"}
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.713317 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.738239 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.755321 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.755457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.755481 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.755512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.755530 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.762778 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.770438 4688 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.793546 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.816796 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.836412 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.854833 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.863432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.863475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.863489 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.863508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.863521 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.868748 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.879954 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.890520 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.903810 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.919561 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.944979 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.964362 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1
f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.966396 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.966435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.966448 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.966467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.966479 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:10Z","lastTransitionTime":"2025-09-30T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.977000 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:10 crc kubenswrapper[4688]: I0930 17:16:10.986442 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:10 crc kubenswrapper[4688]: E0930 17:16:10.986696 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:16:18.986675419 +0000 UTC m=+36.282112977 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.001387 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae027480315648087f39835b9db4019c76062ec
86423250b4a00871bc9e0c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:10Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.016871 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.028752 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.040004 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.049653 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.060362 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.069019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.069068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.069083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.069103 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.069115 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:11Z","lastTransitionTime":"2025-09-30T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.072591 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.083621 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.087206 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.087262 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.087296 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.087331 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087395 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087420 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087432 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087431 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087433 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087482 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:19.087463807 +0000 UTC m=+36.382901365 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087484 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087541 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:19.087515558 +0000 UTC m=+36.382953116 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087454 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087576 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087591 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:19.087549719 +0000 UTC m=+36.382987287 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.087688 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:19.087674462 +0000 UTC m=+36.383112140 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.094620 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.109603 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.117438 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.129473 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.141963 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.152850 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.171593 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.171804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.171889 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.171958 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.172028 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:11Z","lastTransitionTime":"2025-09-30T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.275015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.275059 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.275069 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.275087 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.275097 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:11Z","lastTransitionTime":"2025-09-30T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.377843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.377881 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.377894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.377914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.377926 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:11Z","lastTransitionTime":"2025-09-30T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.428803 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.428883 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.429034 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.429189 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.480992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.481050 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.481071 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.481097 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.481117 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:11Z","lastTransitionTime":"2025-09-30T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.591321 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.591390 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.591409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.591436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.591454 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:11Z","lastTransitionTime":"2025-09-30T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.695309 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.695687 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.695828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.695963 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.696092 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:11Z","lastTransitionTime":"2025-09-30T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.723193 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" event={"ID":"826a3e0f-0f37-4659-a649-b0ce04c8b890","Type":"ContainerStarted","Data":"4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.724085 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.724182 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.741340 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.756831 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.771534 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.784700 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.800373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.800443 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.800462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.800893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.800964 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:11Z","lastTransitionTime":"2025-09-30T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.802433 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.803163 4688 scope.go:117] "RemoveContainer" containerID="42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.803189 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: E0930 17:16:11.803344 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.817183 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.832018 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.848712 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.861494 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.876345 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.894491 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.903410 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.903462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.903477 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.903521 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.903534 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:11Z","lastTransitionTime":"2025-09-30T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.910791 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.922675 4688 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.935904 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.948770 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.960837 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.972049 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.980220 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:11 crc kubenswrapper[4688]: I0930 17:16:11.990738 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\"
:\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.002431 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/c
ni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.005373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.005401 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.005413 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.005430 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.005442 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.011293 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.024173 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.034273 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.043226 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.056705 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.071320 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.088405 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.101087 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.107905 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.107981 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.107993 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.108049 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.108069 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.125890 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae027480315648087f39835b9db4019c76062ec
86423250b4a00871bc9e0c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:12Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.211519 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.211611 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.211630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.211659 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.211674 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.324984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.325057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.325072 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.325092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.325129 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.427845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.427917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.427928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.427948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.427961 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.428297 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:12 crc kubenswrapper[4688]: E0930 17:16:12.428412 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.531135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.531189 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.531204 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.531224 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.531236 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.633691 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.633739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.633749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.633765 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.633774 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.736414 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.736468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.736478 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.736498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.736508 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.838856 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.838891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.838899 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.838914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.838923 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.942406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.942444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.942455 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.942473 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:12 crc kubenswrapper[4688]: I0930 17:16:12.942485 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:12Z","lastTransitionTime":"2025-09-30T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.044906 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.044952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.044963 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.044983 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.044995 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.147807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.147860 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.147870 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.147888 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.147899 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.250813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.250861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.250873 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.250892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.250906 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.354814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.354886 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.354901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.354923 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.354936 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.428816 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.428816 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:13 crc kubenswrapper[4688]: E0930 17:16:13.429000 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:13 crc kubenswrapper[4688]: E0930 17:16:13.429134 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.452028 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.458035 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.458062 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.458071 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.458087 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.458097 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.471117 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.492921 4688 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.506364 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.523169 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.539727 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.551351 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.560067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.560121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.560140 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.560166 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.560182 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.561760 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.584181 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.596695 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.611301 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.621258 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.639605 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.652579 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.663922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.663981 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.663995 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.664015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.664028 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.732807 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/0.log" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.737637 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23" exitCode=1 Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.737713 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.739202 4688 scope.go:117] "RemoveContainer" containerID="bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.759085 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.769978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.770054 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.770074 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.770110 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.770137 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.775219 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.790992 4688 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.805578 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.828853 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:13Z\\\",\\\"message\\\":\\\"dler 9 for removal\\\\nI0930 17:16:13.105709 5974 factory.go:656] Stopping watch factory\\\\nI0930 17:16:13.105712 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.105745 5974 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 17:16:13.105827 5974 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0930 17:16:13.105847 5974 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 17:16:13.105855 5974 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 17:16:13.105861 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0930 17:16:13.105868 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0930 17:16:13.105938 5974 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106014 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106190 5974 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106326 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.873096 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.873101 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.873153 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.873397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.873440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.873458 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.904029 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.918809 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.934634 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.954813 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.966909 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.976614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.976649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.976657 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.976692 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.976702 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:13Z","lastTransitionTime":"2025-09-30T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.979382 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:13 crc kubenswrapper[4688]: I0930 17:16:13.994503 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.008620 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.079171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.079231 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.079244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.079262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.079275 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:14Z","lastTransitionTime":"2025-09-30T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.183195 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.183286 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.183314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.183350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.183375 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:14Z","lastTransitionTime":"2025-09-30T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.286893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.286964 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.286982 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.287010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.287028 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:14Z","lastTransitionTime":"2025-09-30T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.389954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.389998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.390009 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.390026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.390041 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:14Z","lastTransitionTime":"2025-09-30T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.429165 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:14 crc kubenswrapper[4688]: E0930 17:16:14.429397 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.493617 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.493689 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.493706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.493732 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.493748 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:14Z","lastTransitionTime":"2025-09-30T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.597296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.597360 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.597378 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.597406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.597429 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:14Z","lastTransitionTime":"2025-09-30T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.699980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.700035 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.700055 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.700080 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.700095 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:14Z","lastTransitionTime":"2025-09-30T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.744062 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/0.log" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.747191 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.747549 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.763055 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.776838 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.793068 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.802738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.802780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.802792 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.802813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.802828 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:14Z","lastTransitionTime":"2025-09-30T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.809089 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.827422 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.842793 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.862500 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.876345 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.889991 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.904747 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.905228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.905273 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.905295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.905314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.905327 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:14Z","lastTransitionTime":"2025-09-30T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.916790 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.929196 4688 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.940850 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:14 crc kubenswrapper[4688]: I0930 17:16:14.960309 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:13Z\\\",\\\"message\\\":\\\"dler 9 for removal\\\\nI0930 17:16:13.105709 5974 factory.go:656] Stopping watch factory\\\\nI0930 17:16:13.105712 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.105745 5974 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 17:16:13.105827 5974 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0930 17:16:13.105847 5974 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 17:16:13.105855 5974 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 17:16:13.105861 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0930 17:16:13.105868 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0930 17:16:13.105938 5974 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106014 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106190 5974 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106326 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:14Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.009318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.009381 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.009393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.009412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.009424 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.113031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.113092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.113109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.113134 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.113153 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.215924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.215991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.216042 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.216075 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.216099 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.319472 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.319606 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.319628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.319658 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.319678 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.422768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.422853 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.422883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.422916 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.422937 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.429179 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.429196 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:15 crc kubenswrapper[4688]: E0930 17:16:15.429323 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:15 crc kubenswrapper[4688]: E0930 17:16:15.429474 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.526157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.526224 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.526246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.526281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.526300 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.629719 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.629771 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.629788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.629813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.629831 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.734373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.734526 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.734551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.734631 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.734660 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.755709 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/1.log" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.756827 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/0.log" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.760746 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca" exitCode=1 Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.760803 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.760858 4688 scope.go:117] "RemoveContainer" containerID="bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.763102 4688 scope.go:117] "RemoveContainer" containerID="eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca" Sep 30 17:16:15 crc kubenswrapper[4688]: E0930 17:16:15.763485 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.787038 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.806421 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.833935 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.841289 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.841332 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.841348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.841369 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.841385 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.857938 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.875775 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.894061 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.911075 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.929731 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.943987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.944209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.944431 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.944664 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.944811 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:15Z","lastTransitionTime":"2025-09-30T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.946460 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.961456 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc757
94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.978695 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:15 crc kubenswrapper[4688]: I0930 17:16:15.993637 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.006851 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.029207 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:13Z\\\",\\\"message\\\":\\\"dler 9 for removal\\\\nI0930 17:16:13.105709 5974 factory.go:656] Stopping watch factory\\\\nI0930 17:16:13.105712 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.105745 5974 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 17:16:13.105827 5974 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0930 17:16:13.105847 5974 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 17:16:13.105855 5974 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 17:16:13.105861 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0930 17:16:13.105868 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0930 17:16:13.105938 5974 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106014 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106190 5974 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106326 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:15Z\\\",\\\"message\\\":\\\"y.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nF0930 17:16:15.096720 6124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:16:15.096654 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-82htv\\\\nI0930 17:16:15.096738 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nI0930 17:16:15.096744 6124 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-82htv in node crc\\\\nI0930 17:16:15.096757 6124 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.048326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.048363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.048375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.048393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.048407 4688 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.128833 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j"] Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.129617 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.136297 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.136431 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.149481 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.152040 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.152096 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.152115 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.152141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.152155 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.168823 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.189236 4688 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.204267 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.235540 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bae027480315648087f39835b9db4019c76062ec86423250b4a00871bc9e0c23\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:13Z\\\",\\\"message\\\":\\\"dler 9 for removal\\\\nI0930 17:16:13.105709 5974 factory.go:656] Stopping watch factory\\\\nI0930 17:16:13.105712 5974 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.105745 5974 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 17:16:13.105827 5974 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0930 17:16:13.105847 5974 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 17:16:13.105855 5974 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 17:16:13.105861 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0930 17:16:13.105868 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0930 17:16:13.105938 5974 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106014 5974 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106190 5974 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 17:16:13.106326 5974 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:15Z\\\",\\\"message\\\":\\\"y.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nF0930 17:16:15.096720 6124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:16:15.096654 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-82htv\\\\nI0930 17:16:15.096738 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nI0930 17:16:15.096744 6124 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-82htv in node crc\\\\nI0930 17:16:15.096757 6124 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.249059 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.257016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.257080 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.257106 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.257143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.257158 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.266904 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15848e53-ecd7-4722-8ede-8800afdc5e25-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.267008 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15848e53-ecd7-4722-8ede-8800afdc5e25-env-overrides\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.267111 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc9qc\" (UniqueName: \"kubernetes.io/projected/15848e53-ecd7-4722-8ede-8800afdc5e25-kube-api-access-jc9qc\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.267154 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15848e53-ecd7-4722-8ede-8800afdc5e25-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.269474 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.283310 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.299110 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.322087 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.335875 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.348047 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.359043 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.360529 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.360592 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.360605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.360623 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.360634 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.368058 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15848e53-ecd7-4722-8ede-8800afdc5e25-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.368109 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15848e53-ecd7-4722-8ede-8800afdc5e25-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.368129 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15848e53-ecd7-4722-8ede-8800afdc5e25-env-overrides\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.368162 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc9qc\" (UniqueName: \"kubernetes.io/projected/15848e53-ecd7-4722-8ede-8800afdc5e25-kube-api-access-jc9qc\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.368958 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15848e53-ecd7-4722-8ede-8800afdc5e25-env-overrides\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.369552 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15848e53-ecd7-4722-8ede-8800afdc5e25-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.372240 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.382517 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15848e53-ecd7-4722-8ede-8800afdc5e25-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.386854 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.388362 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc9qc\" (UniqueName: \"kubernetes.io/projected/15848e53-ecd7-4722-8ede-8800afdc5e25-kube-api-access-jc9qc\") pod \"ovnkube-control-plane-749d76644c-npp6j\" (UID: \"15848e53-ecd7-4722-8ede-8800afdc5e25\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.428839 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:16 crc kubenswrapper[4688]: E0930 17:16:16.429062 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.446899 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" Sep 30 17:16:16 crc kubenswrapper[4688]: W0930 17:16:16.462170 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15848e53_ecd7_4722_8ede_8800afdc5e25.slice/crio-0457393d0c6eec22d7a672539bcadd9356b96852a9b2af9bbfe13fe9fe5a680d WatchSource:0}: Error finding container 0457393d0c6eec22d7a672539bcadd9356b96852a9b2af9bbfe13fe9fe5a680d: Status 404 returned error can't find the container with id 0457393d0c6eec22d7a672539bcadd9356b96852a9b2af9bbfe13fe9fe5a680d Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.464196 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.464238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.464255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.464277 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.464293 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.566951 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.566999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.567009 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.567025 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.567069 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.671872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.671943 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.671961 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.671990 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.672011 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.768436 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/1.log" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.774123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.774178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.774191 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.774209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.774223 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.776304 4688 scope.go:117] "RemoveContainer" containerID="eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca" Sep 30 17:16:16 crc kubenswrapper[4688]: E0930 17:16:16.776618 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.778046 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" event={"ID":"15848e53-ecd7-4722-8ede-8800afdc5e25","Type":"ContainerStarted","Data":"70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.778086 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" event={"ID":"15848e53-ecd7-4722-8ede-8800afdc5e25","Type":"ContainerStarted","Data":"0457393d0c6eec22d7a672539bcadd9356b96852a9b2af9bbfe13fe9fe5a680d"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.800160 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.823684 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.835789 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.851155 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.867406 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.877646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.877696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.877708 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.877727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.877740 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.891974 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.908687 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.930409 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.948915 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.969458 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.980565 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.980641 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.980653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.980695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.980709 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:16Z","lastTransitionTime":"2025-09-30T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:16 crc kubenswrapper[4688]: I0930 17:16:16.986619 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.000319 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:16Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.013598 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.024870 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.044602 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:15Z\\\",\\\"message\\\":\\\"y.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nF0930 17:16:15.096720 6124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:16:15.096654 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-82htv\\\\nI0930 17:16:15.096738 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nI0930 17:16:15.096744 6124 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-82htv in node crc\\\\nI0930 17:16:15.096757 6124 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.083624 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.083697 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.083716 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.083748 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.083768 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:17Z","lastTransitionTime":"2025-09-30T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.187416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.187486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.187497 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.187515 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.187527 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:17Z","lastTransitionTime":"2025-09-30T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.290692 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.290753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.290766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.290787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.290801 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:17Z","lastTransitionTime":"2025-09-30T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.393687 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.393767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.393792 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.393829 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.393852 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:17Z","lastTransitionTime":"2025-09-30T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.429122 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.429123 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:17 crc kubenswrapper[4688]: E0930 17:16:17.429448 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:17 crc kubenswrapper[4688]: E0930 17:16:17.429287 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.497207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.497286 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.497305 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.497358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.497379 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:17Z","lastTransitionTime":"2025-09-30T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.600397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.600449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.600461 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.600480 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.600495 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:17Z","lastTransitionTime":"2025-09-30T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.703924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.703963 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.703972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.703986 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.703996 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:17Z","lastTransitionTime":"2025-09-30T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.786074 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" event={"ID":"15848e53-ecd7-4722-8ede-8800afdc5e25","Type":"ContainerStarted","Data":"ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.807113 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.807149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.807160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.807176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.807185 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:17Z","lastTransitionTime":"2025-09-30T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.810640 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.828991 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.853040 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"r
un-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:15Z\\\",\\\"message\\\":\\\"y.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nF0930 17:16:15.096720 6124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:16:15.096654 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-82htv\\\\nI0930 17:16:15.096738 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nI0930 17:16:15.096744 6124 ovn.go:134] Ensuring zone local for Pod 
openshift-dns/node-resolver-82htv in node crc\\\\nI0930 17:16:15.096757 6124 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.866879 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.882184 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.897139 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.909952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.909987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.909999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.910018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.910032 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:17Z","lastTransitionTime":"2025-09-30T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.915089 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.931184 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.944328 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.960259 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.976831 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:17 crc kubenswrapper[4688]: I0930 17:16:17.988806 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.001067 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:17Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.013009 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.013052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.013070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.013093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.013108 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.015169 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gbfmv"] Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.015826 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:18 crc kubenswrapper[4688]: E0930 17:16:18.015947 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.021194 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.034990 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.046701 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.059980 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.077116 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.088241 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8gv\" (UniqueName: \"kubernetes.io/projected/23ba3e9d-4109-4a56-aac3-45917eec36d7-kube-api-access-nd8gv\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.088378 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.089199 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.104247 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.116262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.116326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.116340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.116362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.116378 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.116501 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.131471 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.148365 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.161579 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.174014 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.184026 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.188870 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nd8gv\" (UniqueName: \"kubernetes.io/projected/23ba3e9d-4109-4a56-aac3-45917eec36d7-kube-api-access-nd8gv\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.188911 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:18 crc kubenswrapper[4688]: E0930 17:16:18.189063 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:18 crc kubenswrapper[4688]: E0930 17:16:18.189126 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs podName:23ba3e9d-4109-4a56-aac3-45917eec36d7 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:18.689108822 +0000 UTC m=+35.984546380 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs") pod "network-metrics-daemon-gbfmv" (UID: "23ba3e9d-4109-4a56-aac3-45917eec36d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.200342 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c
94aae1510158867a7652afca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:15Z\\\",\\\"message\\\":\\\"y.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nF0930 17:16:15.096720 6124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:16:15.096654 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-82htv\\\\nI0930 17:16:15.096738 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nI0930 17:16:15.096744 6124 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-82htv in node crc\\\\nI0930 17:16:15.096757 6124 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.204916 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8gv\" (UniqueName: \"kubernetes.io/projected/23ba3e9d-4109-4a56-aac3-45917eec36d7-kube-api-access-nd8gv\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.219842 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.221857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.221888 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.221902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.221920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.221934 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
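
Note on the repeated status-patch failures: every "Failed to update status for pod" record in this window is rejected by the same admission webhook, pod.network-node-identity.openshift.io at https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2025-09-30, independent of the state of the pods being patched. A minimal sketch for reading the validity window of whatever certificate that endpoint presents (assumptions: the third-party `cryptography` package is installed, and the handshake gets far enough to surrender the server certificate even if the webhook would later demand a client certificate):

    import socket, ssl
    from cryptography import x509  # assumption: pip install cryptography

    def serving_cert_window(host="127.0.0.1", port=9743):
        # Verification is skipped on purpose: the point is to read the
        # notBefore/notAfter dates, not to validate the chain.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        return cert.not_valid_before, cert.not_valid_after

    # Per the records above, notAfter should print as 2025-08-24 17:21:41.
    print(serving_cert_window())
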
Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.234694 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.248795 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.260896 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:18Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.325740 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.325807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.325824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.325849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.325869 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.428536 4688 util.go:30] "No sandbox for pod can be found. 
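
The failure chain visible so far: the expired webhook certificate makes ovnkube-controller's node-annotation call fail (the F0930 message in its termination record), the container exits with code 1 and lands in CrashLoopBackOff ("back-off 10s restarting"), so no CNI configuration is ever written, the runtime reports NetworkReady=false, the node flips to NodeNotReady, and pods without sandboxes are skipped with "network is not ready". The 10s in the back-off message is only the first step of kubelet's restart delay; a sketch of the commonly documented progression (assumed: 10s base, doubling to a 5m cap; only the initial 10s step is actually visible in these records):

    # Assumed kubelet restart-backoff parameters; only the 10s step is in the log.
    delays, d = [], 10
    while d < 300:
        delays.append(d)
        d *= 2
    delays.append(300)
    print(delays)  # [10, 20, 40, 80, 160, 300] seconds
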
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:18 crc kubenswrapper[4688]: E0930 17:16:18.428687 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.428862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.428886 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.428896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.428910 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.428922 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.533924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.535436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.535466 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.535499 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.535523 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.638457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.638530 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.638556 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.638626 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.638654 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.693460 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:18 crc kubenswrapper[4688]: E0930 17:16:18.694130 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:18 crc kubenswrapper[4688]: E0930 17:16:18.694425 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs podName:23ba3e9d-4109-4a56-aac3-45917eec36d7 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:19.694390584 +0000 UTC m=+36.989828182 (durationBeforeRetry 1s). 
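
The durationBeforeRetry values for the metrics-certs mount climb from 500ms (the m=+35.98 attempt) to 1s here, then to 2s on the next attempt, while volumes that have been failing longer are already at 16s: kubelet retries failed volume operations with exponential backoff. A sketch of that schedule (assumed parameters: 500ms initial delay, factor 2, a cap of roughly 2m2s; this mirrors the spacing in the records, it is not a verbatim copy of kubelet internals):

    import itertools

    def backoff_delays(initial=0.5, factor=2.0, cap=122.0):
        # assumed: 500ms initial, doubling, ~2m2s cap
        d = initial
        while True:
            yield min(d, cap)
            d *= factor

    print(list(itertools.islice(backoff_delays(), 9)))
    # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 122.0] seconds
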
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs") pod "network-metrics-daemon-gbfmv" (UID: "23ba3e9d-4109-4a56-aac3-45917eec36d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.742553 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.742649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.742667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.742695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.742715 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.846015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.846100 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.846127 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.846159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.846183 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.949172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.949254 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.949279 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.949304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.949323 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:18Z","lastTransitionTime":"2025-09-30T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:18 crc kubenswrapper[4688]: I0930 17:16:18.996831 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:18 crc kubenswrapper[4688]: E0930 17:16:18.997053 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:16:34.997018853 +0000 UTC m=+52.292456451 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.053034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.053227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.053296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.053323 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.053345 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.098376 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.098473 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.098621 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.098688 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.098744 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.098763 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.099129 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.099152 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.098839 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.098906 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.099227 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.099239 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.099314 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:35.099074404 +0000 UTC m=+52.394512002 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.099346 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:35.099332721 +0000 UTC m=+52.394770309 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.099372 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:35.099362991 +0000 UTC m=+52.394800589 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.099394 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:35.099384962 +0000 UTC m=+52.394822550 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.156964 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.157032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.157050 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.157077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.157097 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.260843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.260892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.260905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.260924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.260936 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.364278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.364338 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.364356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.364376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.364389 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.428770 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.428771 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.429179 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
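
Each NodeNotReady heartbeat repeats the same runtime complaint: no CNI configuration file in /etc/kubernetes/cni/net.d/. A trivial check of that directory (the path is taken verbatim from the records; run on the node itself), which will stay empty until ovnkube-controller survives long enough to write its config:

    import glob, os

    cni_dir = "/etc/kubernetes/cni/net.d/"  # path verbatim from the log
    confs = sorted(glob.glob(os.path.join(cni_dir, "*")))
    print(confs if confs else f"no CNI configuration file in {cni_dir}")
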
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.429415 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.429844 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.429913 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.467692 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.467762 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.467781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.467807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.467828 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.570516 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.570663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.570682 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.570704 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.570722 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.674068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.674110 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.674120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.674137 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.674149 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.701781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.701833 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.701849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.701872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.701885 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.706790 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.707103 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.707263 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs podName:23ba3e9d-4109-4a56-aac3-45917eec36d7 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:21.707226948 +0000 UTC m=+39.002664546 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs") pod "network-metrics-daemon-gbfmv" (UID: "23ba3e9d-4109-4a56-aac3-45917eec36d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.718469 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:19Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.724554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.724631 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.724643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.724667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.724681 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.746207 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"[status patch identical to the 17:16:19.718469 attempt]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:19Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.751119 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.751202 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.751213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.751228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.751243 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.766443 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"[status patch identical to the 17:16:19.718469 attempt]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:19Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.772677 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.772754 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.772774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.773179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.773237 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.791902 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"[status patch identical to the 17:16:19.718469 attempt]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:19Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.796807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.796885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.796913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.796945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.797004 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.815020 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:19Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:19 crc kubenswrapper[4688]: E0930 17:16:19.815258 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.817237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.817318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.817354 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.817386 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.817447 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.922018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.922089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.922113 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.922141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:19 crc kubenswrapper[4688]: I0930 17:16:19.922161 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:19Z","lastTransitionTime":"2025-09-30T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.026192 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.026264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.026290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.026318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.026338 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.129046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.129163 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.129184 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.129214 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.129232 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.232234 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.232294 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.232310 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.232331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.232347 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.335161 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.335207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.335218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.335233 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.335243 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.428788 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:20 crc kubenswrapper[4688]: E0930 17:16:20.429264 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.439018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.439100 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.439126 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.439168 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.439195 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.542118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.542181 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.542194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.542215 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.542229 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.645249 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.645303 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.645320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.645342 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.645358 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.748231 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.748330 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.748352 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.748396 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.748419 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.851653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.852028 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.852223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.852358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.852477 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.956227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.956307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.956332 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.956365 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:20 crc kubenswrapper[4688]: I0930 17:16:20.956389 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:20Z","lastTransitionTime":"2025-09-30T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.059705 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.059748 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.059759 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.059779 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.059791 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.163351 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.163420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.163468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.163501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.163521 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.268410 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.268780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.268883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.269043 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.269146 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.372391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.372459 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.372478 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.372507 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.372527 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.429005 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.429166 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:21 crc kubenswrapper[4688]: E0930 17:16:21.429222 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.429334 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:21 crc kubenswrapper[4688]: E0930 17:16:21.429454 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:21 crc kubenswrapper[4688]: E0930 17:16:21.429549 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.475371 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.475440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.475454 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.475475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.475491 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.577958 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.578322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.578402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.578471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.578532 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.682725 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.682786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.682798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.682818 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.682832 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: E0930 17:16:21.747138 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:21 crc kubenswrapper[4688]: E0930 17:16:21.747292 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs podName:23ba3e9d-4109-4a56-aac3-45917eec36d7 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:25.74725351 +0000 UTC m=+43.042691108 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs") pod "network-metrics-daemon-gbfmv" (UID: "23ba3e9d-4109-4a56-aac3-45917eec36d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.746914 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.785901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.785972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.785990 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.786013 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.786030 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.889288 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.889356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.889377 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.889406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.889426 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.992529 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.993165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.993350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.993632 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:21 crc kubenswrapper[4688]: I0930 17:16:21.993935 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:21Z","lastTransitionTime":"2025-09-30T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.097117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.097180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.097198 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.097223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.097242 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:22Z","lastTransitionTime":"2025-09-30T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.200824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.200900 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.200920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.200947 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.200967 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:22Z","lastTransitionTime":"2025-09-30T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.305542 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.305657 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.305683 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.305720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.305743 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:22Z","lastTransitionTime":"2025-09-30T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.408889 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.409273 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.409408 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.409666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.409824 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:22Z","lastTransitionTime":"2025-09-30T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.429202 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:22 crc kubenswrapper[4688]: E0930 17:16:22.429376 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.512553 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.512613 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.512626 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.512647 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.512660 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:22Z","lastTransitionTime":"2025-09-30T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.616429 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.616781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.616868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.616956 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.617039 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:22Z","lastTransitionTime":"2025-09-30T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.720237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.720306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.720324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.720352 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.720370 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:22Z","lastTransitionTime":"2025-09-30T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.824046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.824101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.824116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.824141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.824165 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:22Z","lastTransitionTime":"2025-09-30T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.928154 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.928207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.928220 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.928240 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:22 crc kubenswrapper[4688]: I0930 17:16:22.928264 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:22Z","lastTransitionTime":"2025-09-30T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.030896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.030967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.030982 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.031003 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.031014 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.134779 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.134862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.134881 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.134916 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.134935 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.238287 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.238363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.238382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.238407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.238427 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.342192 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.342261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.342274 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.342296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.342309 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.428753 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.428892 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.428951 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:23 crc kubenswrapper[4688]: E0930 17:16:23.428999 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:23 crc kubenswrapper[4688]: E0930 17:16:23.429118 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:23 crc kubenswrapper[4688]: E0930 17:16:23.429260 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.444980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.445034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.445053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.445083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.445108 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.447020 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.465860 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.504787 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.531762 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.548445 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.548497 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.548508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.548526 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.548538 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.557880 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.574849 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.592477 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.612966 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.625192 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.639997 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.651347 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.651406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.651420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.651440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.651452 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.652410 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.667171 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.685454 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.698197 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.709306 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.730990 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:15Z\\\",\\\"message\\\":\\\"y.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nF0930 17:16:15.096720 6124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:16:15.096654 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-82htv\\\\nI0930 17:16:15.096738 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nI0930 17:16:15.096744 6124 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-82htv in node crc\\\\nI0930 17:16:15.096757 6124 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:23Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.754201 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.754239 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.754272 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.754291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.754303 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.857786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.857864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.857883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.857911 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.857927 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.962121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.962170 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.962185 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.962203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:23 crc kubenswrapper[4688]: I0930 17:16:23.962218 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:23Z","lastTransitionTime":"2025-09-30T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.065121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.065194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.065217 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.065247 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.065270 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.169440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.169512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.169536 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.169598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.169626 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.272780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.272860 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.272884 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.272920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.272946 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.376630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.376697 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.376713 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.376738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.376758 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.429065 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:24 crc kubenswrapper[4688]: E0930 17:16:24.429278 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.430271 4688 scope.go:117] "RemoveContainer" containerID="42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.480416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.480476 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.480580 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.480603 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.480617 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.582717 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.582767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.582779 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.582797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.582810 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.687033 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.687091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.687107 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.687133 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.687184 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.789831 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.789874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.789886 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.789905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.789921 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.815031 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.817367 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.817714 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.835618 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.852597 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.868357 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.881234 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.892533 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.892634 
4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.892661 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.892694 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.892714 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.899738 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.915293 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.937134 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.950442 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.969528 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.989417 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:24Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.996135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.996187 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.996199 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.996218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:24 crc kubenswrapper[4688]: I0930 17:16:24.996234 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:24Z","lastTransitionTime":"2025-09-30T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.004725 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:25Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.019792 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:25Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.035552 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:25Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.051020 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:25Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.064109 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:25Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.094614 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:15Z\\\",\\\"message\\\":\\\"y.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nF0930 17:16:15.096720 6124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:16:15.096654 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-82htv\\\\nI0930 17:16:15.096738 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nI0930 17:16:15.096744 6124 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-82htv in node crc\\\\nI0930 17:16:15.096757 6124 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:25Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.098461 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.098501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.098510 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.098528 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.098539 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:25Z","lastTransitionTime":"2025-09-30T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.201998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.202068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.202084 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.202106 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.202119 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:25Z","lastTransitionTime":"2025-09-30T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.304725 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.304796 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.304815 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.304843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.304862 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:25Z","lastTransitionTime":"2025-09-30T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.407933 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.408034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.408080 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.408118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.408143 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:25Z","lastTransitionTime":"2025-09-30T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.428656 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.428738 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.428989 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:25 crc kubenswrapper[4688]: E0930 17:16:25.428975 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:25 crc kubenswrapper[4688]: E0930 17:16:25.429200 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:25 crc kubenswrapper[4688]: E0930 17:16:25.429381 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.511328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.511370 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.511383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.511402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.511415 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:25Z","lastTransitionTime":"2025-09-30T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.614284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.614324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.614334 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.614349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.614359 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:25Z","lastTransitionTime":"2025-09-30T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.717769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.717830 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.717841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.717860 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.717871 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:25Z","lastTransitionTime":"2025-09-30T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.798703 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:25 crc kubenswrapper[4688]: E0930 17:16:25.798849 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:25 crc kubenswrapper[4688]: E0930 17:16:25.798939 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs podName:23ba3e9d-4109-4a56-aac3-45917eec36d7 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:33.798913955 +0000 UTC m=+51.094351513 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs") pod "network-metrics-daemon-gbfmv" (UID: "23ba3e9d-4109-4a56-aac3-45917eec36d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.819922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.819987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.820011 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.820041 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.820065 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:25Z","lastTransitionTime":"2025-09-30T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.924064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.924116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.924127 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.924144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:25 crc kubenswrapper[4688]: I0930 17:16:25.924154 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:25Z","lastTransitionTime":"2025-09-30T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.027522 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.027583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.027593 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.027614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.027625 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.132051 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.132131 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.132154 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.132187 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.132209 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.235924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.236005 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.236025 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.236052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.236071 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.339655 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.339744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.339769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.339801 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.339822 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.429270 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:26 crc kubenswrapper[4688]: E0930 17:16:26.429493 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.444070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.444146 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.444171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.444290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.444321 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.548383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.548451 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.548471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.548498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.548517 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.652677 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.652767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.652794 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.652829 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.652855 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.756266 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.756333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.756352 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.756382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.756402 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.859994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.860047 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.860063 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.860086 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.860102 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.964002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.964070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.964088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.964116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:26 crc kubenswrapper[4688]: I0930 17:16:26.964140 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:26Z","lastTransitionTime":"2025-09-30T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.071340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.071418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.071438 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.071467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.071491 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:27Z","lastTransitionTime":"2025-09-30T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.175322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.175400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.175426 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.175459 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.175482 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:27Z","lastTransitionTime":"2025-09-30T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.278560 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.278621 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.278634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.278655 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.278668 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:27Z","lastTransitionTime":"2025-09-30T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.383116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.383176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.383196 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.383221 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.383239 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:27Z","lastTransitionTime":"2025-09-30T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.428814 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.428836 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.428999 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:27 crc kubenswrapper[4688]: E0930 17:16:27.429192 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:27 crc kubenswrapper[4688]: E0930 17:16:27.429750 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:27 crc kubenswrapper[4688]: E0930 17:16:27.429928 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.430407 4688 scope.go:117] "RemoveContainer" containerID="eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.486215 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.486276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.486298 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.486359 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.486381 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:27Z","lastTransitionTime":"2025-09-30T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.588436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.588874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.588892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.588911 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.588923 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:27Z","lastTransitionTime":"2025-09-30T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.691705 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.691775 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.691788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.691810 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.691823 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:27Z","lastTransitionTime":"2025-09-30T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.794919 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.794972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.794984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.795007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.795019 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:27Z","lastTransitionTime":"2025-09-30T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.829348 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/1.log" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.833187 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.834465 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.850917 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.867043 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.884494 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.898398 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.899051 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.899117 
4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.899134 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.899162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.899180 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:27Z","lastTransitionTime":"2025-09-30T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.909817 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.924022 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.940717 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.952739 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.965831 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.978556 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:27 crc kubenswrapper[4688]: I0930 17:16:27.991627 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:27Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.002043 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.002079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.002088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.002104 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.002114 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.009967 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.022557 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.035763 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.052360 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.075034 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:15Z\\\",\\\"message\\\":\\\"y.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nF0930 17:16:15.096720 6124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:16:15.096654 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-82htv\\\\nI0930 17:16:15.096738 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nI0930 17:16:15.096744 6124 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-82htv in node crc\\\\nI0930 17:16:15.096757 6124 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-dns\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.104619 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.104677 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.104688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.104709 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.104721 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.207972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.208035 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.208052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.208073 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.208085 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.311107 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.311162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.311174 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.311194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.311206 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.414819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.414890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.414910 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.414938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.414956 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.429236 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:28 crc kubenswrapper[4688]: E0930 17:16:28.429445 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.518668 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.518756 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.518786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.518820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.518843 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.621880 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.621975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.621995 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.622029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.622051 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.725748 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.725812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.725830 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.725856 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.725875 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.829832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.829923 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.829956 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.829988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.830014 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.840927 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/2.log" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.842334 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/1.log" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.847413 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a" exitCode=1 Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.847482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.847548 4688 scope.go:117] "RemoveContainer" containerID="eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.848748 4688 scope.go:117] "RemoveContainer" containerID="5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a" Sep 30 17:16:28 crc kubenswrapper[4688]: E0930 17:16:28.849117 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.872831 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.890272 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.920543 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eaadf81b7e690b7d68d2ce5b54efee31da0d422c94aae1510158867a7652afca\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:15Z\\\",\\\"message\\\":\\\"y.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nF0930 17:16:15.096720 6124 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 17:16:15.096654 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-82htv\\\\nI0930 17:16:15.096738 6124 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rwctq\\\\nI0930 17:16:15.096744 6124 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-82htv in node crc\\\\nI0930 17:16:15.096757 6124 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: 
[]services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.933633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.933702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.933720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.933747 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.933768 4688 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:28Z","lastTransitionTime":"2025-09-30T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.943545 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.963807 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.976404 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:28 crc kubenswrapper[4688]: I0930 17:16:28.989432 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:28Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.015988 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.030333 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.036623 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.036665 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.036678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.036696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.036709 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.043986 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.060310 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.078516 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.093450 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.107286 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.122163 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.138727 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.139839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.139893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.139909 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.139934 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.139953 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.243524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.243613 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.243632 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.243661 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.243680 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.347300 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.347391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.347404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.347449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.347470 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.428714 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.428724 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.428869 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:29 crc kubenswrapper[4688]: E0930 17:16:29.428893 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:29 crc kubenswrapper[4688]: E0930 17:16:29.429065 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:29 crc kubenswrapper[4688]: E0930 17:16:29.429166 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.449814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.449865 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.449875 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.449891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.449902 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.552737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.552784 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.552795 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.552814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.552825 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.655765 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.655845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.655865 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.655896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.655918 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.759637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.759712 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.759722 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.759739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.759748 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.854810 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/2.log" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.859534 4688 scope.go:117] "RemoveContainer" containerID="5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a" Sep 30 17:16:29 crc kubenswrapper[4688]: E0930 17:16:29.859700 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.862802 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.862891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.862913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.862942 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.862961 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.882866 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.902203 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.921924 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.940565 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.958196 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.966004 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.966051 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.966063 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.966082 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.966095 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:29Z","lastTransitionTime":"2025-09-30T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.976918 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:29 crc kubenswrapper[4688]: I0930 17:16:29.993357 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:29Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.008151 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.028039 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.051346 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.064556 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.071192 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.071253 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.071265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.071293 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.071307 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.087177 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.106986 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.121018 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.136214 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.157329 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.174465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.174512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.174523 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.174541 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.174555 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.189874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.189933 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.189946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.189965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.189979 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: E0930 17:16:30.202384 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 
2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.206082 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.206131 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.206144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.206164 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.206176 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: E0930 17:16:30.218764 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 
2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.222906 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.222957 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.222972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.222994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.223013 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: E0930 17:16:30.234979 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 
2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.238264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.238301 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.238309 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.238328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.238339 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: E0930 17:16:30.250251 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 
2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.254147 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.254207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.254218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.254236 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.254248 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: E0930 17:16:30.265542 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:30Z is after 
2025-08-24T17:21:41Z" Sep 30 17:16:30 crc kubenswrapper[4688]: E0930 17:16:30.265698 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.282582 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.282637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.282649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.282669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.282682 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.385798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.385880 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.385901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.385930 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.385990 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.428653 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:30 crc kubenswrapper[4688]: E0930 17:16:30.428918 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.489294 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.489380 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.489404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.489435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.489462 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.593204 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.593286 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.593307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.593334 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.593355 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.697192 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.697239 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.697248 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.697266 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.697281 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.800999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.801056 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.801067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.801092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.801104 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.904151 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.904277 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.904296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.904324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:30 crc kubenswrapper[4688]: I0930 17:16:30.904344 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:30Z","lastTransitionTime":"2025-09-30T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.007420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.007492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.007505 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.007526 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.007540 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.111365 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.111449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.111474 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.111506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.111530 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.215942 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.216020 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.216038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.216067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.216131 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.319462 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.319522 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.319540 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.319610 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.319639 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.423027 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.423100 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.423125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.423157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.423179 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.428476 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.428476 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:31 crc kubenswrapper[4688]: E0930 17:16:31.428741 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.428780 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:31 crc kubenswrapper[4688]: E0930 17:16:31.428984 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:31 crc kubenswrapper[4688]: E0930 17:16:31.429184 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.526727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.526786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.526804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.526829 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.526851 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.630832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.630905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.630939 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.631021 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.631047 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.734138 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.734210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.734228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.734256 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.734275 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.838217 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.838311 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.838337 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.838372 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.838400 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.941544 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.941648 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.941668 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.941698 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:31 crc kubenswrapper[4688]: I0930 17:16:31.941716 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:31Z","lastTransitionTime":"2025-09-30T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.045051 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.045143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.045182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.045224 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.045261 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.149201 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.149258 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.149277 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.149297 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.149311 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.252043 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.252113 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.252141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.252169 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.252189 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.355723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.355790 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.355809 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.355834 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.355853 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.428481 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:32 crc kubenswrapper[4688]: E0930 17:16:32.428657 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.459673 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.459787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.459815 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.459844 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.459866 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.563198 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.563257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.563269 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.563308 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.563332 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.665898 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.666638 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.666681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.666701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.666714 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.772001 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.772061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.772082 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.772115 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.772139 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.874902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.874967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.874985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.875011 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.875028 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.978742 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.978804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.978815 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.978834 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:32 crc kubenswrapper[4688]: I0930 17:16:32.978845 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:32Z","lastTransitionTime":"2025-09-30T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.082330 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.082437 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.082468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.082508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.082532 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:33Z","lastTransitionTime":"2025-09-30T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.187772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.187928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.187951 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.187988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.188012 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:33Z","lastTransitionTime":"2025-09-30T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.290815 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.290862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.290874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.290891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.290903 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:33Z","lastTransitionTime":"2025-09-30T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.393946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.394057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.394082 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.394116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.394140 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:33Z","lastTransitionTime":"2025-09-30T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.428508 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.428674 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.428744 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:33 crc kubenswrapper[4688]: E0930 17:16:33.428923 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:33 crc kubenswrapper[4688]: E0930 17:16:33.429055 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:33 crc kubenswrapper[4688]: E0930 17:16:33.429202 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.444616 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.463183 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.477385 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.492804 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.497743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.497778 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.497791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.497809 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.497822 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:33Z","lastTransitionTime":"2025-09-30T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.508245 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:
16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.524691 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.542100 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.566556 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.586282 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.600268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.600331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.600345 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.600368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.600382 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:33Z","lastTransitionTime":"2025-09-30T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.604520 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.619275 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.643632 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.657233 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.672217 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.682709 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.693640 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:33Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.702762 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.702821 
4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.702838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.702867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.702882 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:33Z","lastTransitionTime":"2025-09-30T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.806551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.806681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.806706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.806739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.806766 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:33Z","lastTransitionTime":"2025-09-30T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.896529 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:33 crc kubenswrapper[4688]: E0930 17:16:33.896777 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:33 crc kubenswrapper[4688]: E0930 17:16:33.896886 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs podName:23ba3e9d-4109-4a56-aac3-45917eec36d7 nodeName:}" failed. No retries permitted until 2025-09-30 17:16:49.896856104 +0000 UTC m=+67.192293702 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs") pod "network-metrics-daemon-gbfmv" (UID: "23ba3e9d-4109-4a56-aac3-45917eec36d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.910336 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.910403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.910426 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.910459 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:33 crc kubenswrapper[4688]: I0930 17:16:33.910482 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:33Z","lastTransitionTime":"2025-09-30T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.013857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.013920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.013938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.013965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.013983 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.117850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.117930 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.117962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.117998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.118018 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.221975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.222046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.222065 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.222098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.222118 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.324928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.324989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.325001 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.325022 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.325035 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.427888 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.427927 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.427937 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.427954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.427964 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.428520 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:34 crc kubenswrapper[4688]: E0930 17:16:34.428644 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.531135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.531205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.531217 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.531236 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.531248 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.634002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.634068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.634089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.634116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.634135 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.738130 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.738196 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.738214 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.738244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.738264 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.840938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.841025 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.841049 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.841081 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.841108 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.945223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.945321 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.945340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.945366 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:34 crc kubenswrapper[4688]: I0930 17:16:34.945386 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:34Z","lastTransitionTime":"2025-09-30T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.010067 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.010548 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:17:07.010488156 +0000 UTC m=+84.305925764 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.048531 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.048625 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.048636 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.048654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.048664 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.111880 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.111973 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.112027 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112107 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:17:07.112084225 +0000 UTC m=+84.407521793 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.112132 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.112175 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112202 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112261 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:17:07.112242539 +0000 UTC m=+84.407680137 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112289 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112304 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112318 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112361 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:17:07.112351552 +0000 UTC m=+84.407789120 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112435 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112478 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112494 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.112593 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:17:07.112551177 +0000 UTC m=+84.407988745 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.152383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.152445 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.152457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.152475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.152490 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.255894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.256355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.256366 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.256389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.256401 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.359209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.359281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.359294 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.359316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.359331 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.429396 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.429484 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.429597 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.429758 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.430707 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:35 crc kubenswrapper[4688]: E0930 17:16:35.430784 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.461676 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.461743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.461768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.461806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.461832 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.565020 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.565089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.565105 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.565129 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.565145 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.669084 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.669741 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.669802 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.669839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.669865 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.774363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.774448 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.774471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.774505 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.774534 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.877650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.877689 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.877698 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.877713 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.877723 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.981179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.981676 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.981905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.982095 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:35 crc kubenswrapper[4688]: I0930 17:16:35.982319 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:35Z","lastTransitionTime":"2025-09-30T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.086555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.086904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.086988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.087093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.087183 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:36Z","lastTransitionTime":"2025-09-30T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.190445 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.190500 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.190513 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.190534 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.190547 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:36Z","lastTransitionTime":"2025-09-30T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.294010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.294400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.294534 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.294708 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.294823 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:36Z","lastTransitionTime":"2025-09-30T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.397992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.398359 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.398463 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.398537 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.398644 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:36Z","lastTransitionTime":"2025-09-30T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.428295 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:36 crc kubenswrapper[4688]: E0930 17:16:36.428631 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.501433 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.501486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.501501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.501522 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.501534 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:36Z","lastTransitionTime":"2025-09-30T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.604908 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.604960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.604970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.604988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.605002 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:36Z","lastTransitionTime":"2025-09-30T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.707768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.707802 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.707812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.707828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.707838 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:36Z","lastTransitionTime":"2025-09-30T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.811072 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.811125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.811142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.811164 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.811242 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:36Z","lastTransitionTime":"2025-09-30T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.914418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.914485 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.914501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.914524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:36 crc kubenswrapper[4688]: I0930 17:16:36.914605 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:36Z","lastTransitionTime":"2025-09-30T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.016850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.016893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.016902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.016916 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.016926 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.120248 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.120312 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.120329 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.120358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.120374 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.223688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.223750 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.223767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.223792 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.223811 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.327434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.327491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.327508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.327534 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.327551 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.428907 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.428956 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.428916 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:37 crc kubenswrapper[4688]: E0930 17:16:37.429549 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.430904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.430927 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.430938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.430956 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.430969 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: E0930 17:16:37.431012 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:37 crc kubenswrapper[4688]: E0930 17:16:37.430897 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.535278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.536327 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.536858 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.537120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.537389 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.641487 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.641547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.641604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.641631 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.641648 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.745059 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.745112 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.745130 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.745154 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.745170 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.891630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.891967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.892118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.892208 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.892286 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.995018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.995334 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.995420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.995528 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:37 crc kubenswrapper[4688]: I0930 17:16:37.995639 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:37Z","lastTransitionTime":"2025-09-30T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.099257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.099318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.099328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.099346 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.099357 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:38Z","lastTransitionTime":"2025-09-30T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.202872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.202917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.202929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.202949 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.202960 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:38Z","lastTransitionTime":"2025-09-30T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.305987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.306130 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.306143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.306180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.306190 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:38Z","lastTransitionTime":"2025-09-30T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.414486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.414541 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.414553 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.414588 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.414601 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:38Z","lastTransitionTime":"2025-09-30T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.428400 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:38 crc kubenswrapper[4688]: E0930 17:16:38.428770 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.517006 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.517041 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.517053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.517070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.517081 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:38Z","lastTransitionTime":"2025-09-30T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.618900 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.618943 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.618952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.618968 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.618979 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:38Z","lastTransitionTime":"2025-09-30T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.721283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.721314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.721323 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.721367 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.721404 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:38Z","lastTransitionTime":"2025-09-30T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.826533 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.829002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.829053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.829088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.829112 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:38Z","lastTransitionTime":"2025-09-30T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.932707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.932766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.932790 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.932819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:38 crc kubenswrapper[4688]: I0930 17:16:38.932843 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:38Z","lastTransitionTime":"2025-09-30T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.036862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.036928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.036940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.036962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.036978 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.140816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.140880 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.140896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.140920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.140937 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.244030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.244105 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.244124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.244149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.244168 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.348101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.348172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.348194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.348226 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.348245 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.428607 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.428677 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.428642 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:39 crc kubenswrapper[4688]: E0930 17:16:39.428911 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:39 crc kubenswrapper[4688]: E0930 17:16:39.428995 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:39 crc kubenswrapper[4688]: E0930 17:16:39.429113 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.451306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.451377 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.451404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.451435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.451460 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.510111 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.525917 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.533290 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.550208 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.554457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:39 
crc kubenswrapper[4688]: I0930 17:16:39.554496 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.554514 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.554539 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.554558 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.568690 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\"
,\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.587956 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.607024 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.629033 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82
799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.648936 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.657844 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.657924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.657941 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.657967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.657984 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.669989 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.693239 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.729432 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.760377 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.761434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.761488 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.761501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.761529 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.761543 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.776251 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.788982 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.800732 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" 
for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.813798 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.825936 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:39Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.866280 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.866407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.866459 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.866493 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.866550 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.970640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.970698 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.970718 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.970764 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:39 crc kubenswrapper[4688]: I0930 17:16:39.970783 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:39Z","lastTransitionTime":"2025-09-30T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.073650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.073724 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.073741 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.073770 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.073790 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.177355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.177421 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.177434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.177457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.177472 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.281115 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.281188 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.281205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.281233 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.281251 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
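The block above is the kubelet's steady state while the network plugin is down: each status sync re-records the same four node events and sets Ready=False because the runtime network readiness check finds no CNI configuration under /etc/kubernetes/cni/net.d/. As a minimal illustrative sketch (not the kubelet's actual implementation; the helper name and file patterns are assumptions), the readiness condition amounts to probing that directory for any network configuration file:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// cniReady is a hypothetical probe mirroring the condition behind the
// log message: the runtime network is treated as ready only once the
// CNI conf directory contains at least one network configuration file.
func cniReady(confDir string) error {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			return err
		}
		if len(matches) > 0 {
			return nil // a network provider has written its config
		}
	}
	// Same shape as the NetworkPluginNotReady message recorded above.
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := cniReady("/etc/kubernetes/cni/net.d/"); err != nil {
		fmt.Println("container runtime network not ready:", err)
	}
}
```

Until the network provider (here presumably OVN-Kubernetes, given the network-node-identity webhook in the surrounding entries) writes a config file into that directory, this condition, and the NodeNotReady event block above, repeat on every sync.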
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.384767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.384860 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.384898 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.384933 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.384956 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.428792 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:16:40 crc kubenswrapper[4688]: E0930 17:16:40.428978 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.445790 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.445873 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.445893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.445920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.445939 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:40 crc kubenswrapper[4688]: E0930 17:16:40.467678 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:40Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.473499 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.473551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.473583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.473601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.473615 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:40 crc kubenswrapper[4688]: E0930 17:16:40.488892 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:40Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.495133 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.495177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
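Every one of these node status patches dies the same way: the API server must consult the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/node, but the webhook's serving certificate expired on 2025-08-24T17:21:41Z, more than a month before the log's current time, so the TLS handshake is rejected before the patch is ever evaluated and the kubelet retries. The error string is Go's standard x509 validity-window failure. A minimal sketch of that comparison, with a hypothetical certificate window copied from the dates in the log (not the webhook's real certificate):

```go
package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// validityError reproduces the shape of Go's x509 validity check: a
// certificate is rejected when the verification time falls outside its
// [NotBefore, NotAfter] window.
func validityError(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is after %s",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
	return nil
}

func main() {
	// Hypothetical window matching the log: the cert expired 2025-08-24T17:21:41Z.
	cert := &x509.Certificate{
		NotBefore: time.Date(2025, time.July, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:  time.Date(2025, time.August, 24, 17, 21, 41, 0, time.UTC),
	}
	now := time.Date(2025, time.September, 30, 17, 16, 40, 0, time.UTC)
	fmt.Println(validityError(cert, now))
}
```

The same expired certificate backs the webhook's /pod endpoint, which is why the pod status patches earlier in the log fail with the identical message.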
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.495195 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.495221 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.495240 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:40 crc kubenswrapper[4688]: E0930 17:16:40.514937 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:40Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.520692 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.520759 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.520778 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.520810 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.520829 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:40 crc kubenswrapper[4688]: E0930 17:16:40.542783 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:40Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.548359 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.548418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.548430 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.548454 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.548471 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:40 crc kubenswrapper[4688]: E0930 17:16:40.569163 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:40Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:40 crc kubenswrapper[4688]: E0930 17:16:40.569358 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.571506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.571562 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.571609 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.571633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.571651 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.674608 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.674663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.674678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.674701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.674718 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.777596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.777641 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.777655 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.777672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.777684 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.881158 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.881215 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.881231 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.881257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.881279 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.984376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.984455 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.984475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.984507 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:40 crc kubenswrapper[4688]: I0930 17:16:40.984531 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:40Z","lastTransitionTime":"2025-09-30T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.088000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.088068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.088091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.088124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.088148 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:41Z","lastTransitionTime":"2025-09-30T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.191915 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.191993 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.192018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.192052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.192074 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:41Z","lastTransitionTime":"2025-09-30T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.295482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.295686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.295762 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.295789 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.295845 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:41Z","lastTransitionTime":"2025-09-30T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.398970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.399048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.399072 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.399105 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.399126 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:41Z","lastTransitionTime":"2025-09-30T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.428787 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.428868 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:41 crc kubenswrapper[4688]: E0930 17:16:41.429015 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.429085 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:41 crc kubenswrapper[4688]: E0930 17:16:41.429278 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:41 crc kubenswrapper[4688]: E0930 17:16:41.429426 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.431412 4688 scope.go:117] "RemoveContainer" containerID="5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a" Sep 30 17:16:41 crc kubenswrapper[4688]: E0930 17:16:41.432053 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.502390 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.502447 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.502467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.502491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.502509 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:41Z","lastTransitionTime":"2025-09-30T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.606077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.606148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.606171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.606201 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.606223 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:41Z","lastTransitionTime":"2025-09-30T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.710643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.710713 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.710737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.710768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.710788 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:41Z","lastTransitionTime":"2025-09-30T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.814334 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.814409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.814434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.814467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.814494 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:41Z","lastTransitionTime":"2025-09-30T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.917970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.918038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.918062 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.918092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:41 crc kubenswrapper[4688]: I0930 17:16:41.918115 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:41Z","lastTransitionTime":"2025-09-30T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.021521 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.021632 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.021650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.021677 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.021699 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.125334 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.125418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.125449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.125481 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.125510 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.229914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.229985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.230002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.230029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.230050 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.333661 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.333734 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.333757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.333794 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.333818 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.428496 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:42 crc kubenswrapper[4688]: E0930 17:16:42.428754 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.437492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.437559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.437601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.437629 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.437646 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.540450 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.540506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.540519 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.540546 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.540633 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.643656 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.643694 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.643707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.643729 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.643743 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.747017 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.747110 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.747132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.747153 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.747170 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.854500 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.854564 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.854621 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.854672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.854685 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.956929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.957000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.957025 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.957059 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:42 crc kubenswrapper[4688]: I0930 17:16:42.957087 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:42Z","lastTransitionTime":"2025-09-30T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.061209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.061278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.061296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.061323 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.061342 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.164684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.164750 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.164763 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.164784 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.164798 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.268230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.268268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.268277 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.268293 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.268302 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.371053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.371097 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.371105 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.371120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.371130 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.428534 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.428636 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:43 crc kubenswrapper[4688]: E0930 17:16:43.428696 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.428775 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:43 crc kubenswrapper[4688]: E0930 17:16:43.428915 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:43 crc kubenswrapper[4688]: E0930 17:16:43.429016 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.447461 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.465961 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.474920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.474984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.475005 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.475031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.475049 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
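
Every status_manager.go:875 failure in this section bottoms out in the same TLS error: the pod.network-node-identity.openshift.io webhook presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-09-30. A minimal sketch of the validity check behind "x509: certificate has expired or is not yet valid", using Go's crypto/x509; the certificate path is a placeholder, not one taken from the log:

// Parse a PEM certificate and compare the clock against its validity
// window, mirroring the x509 error text seen in the journal.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("webhook-cert.pem") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Same shape as the journal message:
		// "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
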
Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.479741 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.494558 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.506607 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.520332 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.535505 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.550474 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.572483 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.579060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.579111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.579124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.579144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.579158 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.594204 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.611329 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.625776 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.641251 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a
09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.659021 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.674824 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.682702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.682753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.682769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.682791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.682803 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.689486 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.701371 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.713776 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\
\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5
jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
certificate: x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.727120 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.742128 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.759328 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.773854 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.786422 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.786486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.786501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.786524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.786539 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.791201 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.813934 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.828159 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.843050 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.858478 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.871442 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.884496 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.889290 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.889340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.889349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.889365 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.889377 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.905463 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708
f3825a903e60f73d9b803b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.920944 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.937338 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.947536 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.959205 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.973817 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:43Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.991988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.992035 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.992049 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.992070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:43 crc kubenswrapper[4688]: I0930 17:16:43.992086 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:43Z","lastTransitionTime":"2025-09-30T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.094391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.094452 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.094464 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.094488 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.094502 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:44Z","lastTransitionTime":"2025-09-30T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.197870 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.197935 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.197955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.198000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.198065 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:44Z","lastTransitionTime":"2025-09-30T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.301395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.301478 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.301508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.301543 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.301616 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:44Z","lastTransitionTime":"2025-09-30T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.404465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.404503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.404513 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.404533 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.404548 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:44Z","lastTransitionTime":"2025-09-30T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.428878 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:44 crc kubenswrapper[4688]: E0930 17:16:44.429021 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.508194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.508253 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.508270 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.508293 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.508363 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:44Z","lastTransitionTime":"2025-09-30T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.612490 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.612553 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.612596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.612625 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.612644 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:44Z","lastTransitionTime":"2025-09-30T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.716041 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.716134 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.716175 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.716209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.716232 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:44Z","lastTransitionTime":"2025-09-30T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.819281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.820290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.820346 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.820372 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.820388 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:44Z","lastTransitionTime":"2025-09-30T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.923600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.923663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.923681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.923710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:44 crc kubenswrapper[4688]: I0930 17:16:44.923731 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:44Z","lastTransitionTime":"2025-09-30T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.027230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.027300 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.027324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.027354 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.027375 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.130789 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.130823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.130832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.130923 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.130935 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.234256 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.234327 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.234414 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.234444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.234462 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.338452 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.338523 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.338546 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.338604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.338633 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.429167 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.429263 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.429310 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:45 crc kubenswrapper[4688]: E0930 17:16:45.429399 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:45 crc kubenswrapper[4688]: E0930 17:16:45.429675 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:45 crc kubenswrapper[4688]: E0930 17:16:45.429821 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.442276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.442338 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.442355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.442376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.442389 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.545545 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.545634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.545652 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.545678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.545696 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.649379 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.649478 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.649516 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.649551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.649624 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.753203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.753258 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.753319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.753346 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.753364 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.856563 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.856666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.856680 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.856701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.856718 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.959849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.959894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.959906 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.959924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:45 crc kubenswrapper[4688]: I0930 17:16:45.959937 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:45Z","lastTransitionTime":"2025-09-30T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.063046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.063107 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.063124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.063149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.063168 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.166346 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.166415 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.166431 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.166454 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.166471 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.270375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.270458 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.270483 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.270514 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.270535 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.374785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.374885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.374905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.374930 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.374948 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.428937 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:46 crc kubenswrapper[4688]: E0930 17:16:46.429191 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.479052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.479119 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.479137 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.479174 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.479196 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.582726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.582774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.582786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.582805 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.582818 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.686757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.686849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.686871 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.686897 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.686945 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.791042 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.791113 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.791132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.791166 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.791188 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.894165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.894236 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.894260 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.894293 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.894330 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.997504 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.997604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.997625 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.997653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:46 crc kubenswrapper[4688]: I0930 17:16:46.997671 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:46Z","lastTransitionTime":"2025-09-30T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.100717 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.100768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.100785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.100811 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.100830 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:47Z","lastTransitionTime":"2025-09-30T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.203943 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.204010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.204038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.204065 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.204089 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:47Z","lastTransitionTime":"2025-09-30T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.307767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.307824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.307844 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.307868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.307886 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:47Z","lastTransitionTime":"2025-09-30T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.410546 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.410649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.410667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.410693 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.410711 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:47Z","lastTransitionTime":"2025-09-30T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.428304 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.428378 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:47 crc kubenswrapper[4688]: E0930 17:16:47.428456 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.428305 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:47 crc kubenswrapper[4688]: E0930 17:16:47.428686 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:47 crc kubenswrapper[4688]: E0930 17:16:47.428770 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.514445 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.514524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.514542 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.514626 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.514667 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:47Z","lastTransitionTime":"2025-09-30T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.617392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.617481 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.617509 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.617539 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.617561 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:47Z","lastTransitionTime":"2025-09-30T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.720918 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.721012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.721027 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.721045 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.721057 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:47Z","lastTransitionTime":"2025-09-30T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.823997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.824114 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.824136 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.824197 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.824219 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:47Z","lastTransitionTime":"2025-09-30T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.927511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.927633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.927695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.927728 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:47 crc kubenswrapper[4688]: I0930 17:16:47.928322 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:47Z","lastTransitionTime":"2025-09-30T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.032935 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.032981 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.032994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.033014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.033026 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.136097 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.136182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.136196 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.136219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.136233 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.238932 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.238978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.238991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.239011 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.239024 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.341518 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.341607 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.341627 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.341651 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.341669 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.428368 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:48 crc kubenswrapper[4688]: E0930 17:16:48.428543 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.444969 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.445023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.445033 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.445051 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.445063 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.548290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.548326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.548335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.548350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.548360 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.651594 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.651640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.651658 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.651680 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.651697 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.754813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.754859 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.754868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.754888 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.754899 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.858179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.858229 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.858246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.858264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.858276 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.961336 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.961382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.961391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.961411 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:48 crc kubenswrapper[4688]: I0930 17:16:48.961421 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:48Z","lastTransitionTime":"2025-09-30T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.066010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.066061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.066077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.066095 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.066106 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.170145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.170194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.170205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.170224 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.170234 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.273994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.274057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.274070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.274091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.274106 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.377198 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.377250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.377261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.377279 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.377292 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.428998 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:49 crc kubenswrapper[4688]: E0930 17:16:49.429232 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.429229 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.429286 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:49 crc kubenswrapper[4688]: E0930 17:16:49.429352 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:49 crc kubenswrapper[4688]: E0930 17:16:49.429641 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.484191 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.484283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.484315 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.484353 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.484377 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.587096 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.587146 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.587157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.587178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.587191 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.689681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.689731 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.689744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.689762 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.689777 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.792536 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.792599 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.792609 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.792625 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.792634 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.895014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.895052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.895062 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.895079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.895091 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.981877 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:49 crc kubenswrapper[4688]: E0930 17:16:49.982018 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:49 crc kubenswrapper[4688]: E0930 17:16:49.982098 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs podName:23ba3e9d-4109-4a56-aac3-45917eec36d7 nodeName:}" failed. No retries permitted until 2025-09-30 17:17:21.982077962 +0000 UTC m=+99.277515510 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs") pod "network-metrics-daemon-gbfmv" (UID: "23ba3e9d-4109-4a56-aac3-45917eec36d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.998291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.998345 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.998356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.998376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:49 crc kubenswrapper[4688]: I0930 17:16:49.998386 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:49Z","lastTransitionTime":"2025-09-30T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.100865 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.100943 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.100961 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.100993 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.101005 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.203631 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.203662 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.203671 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.203686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.203696 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.306549 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.306719 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.306737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.306757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.306769 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.409614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.409660 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.409673 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.409693 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.409705 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.429143 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:50 crc kubenswrapper[4688]: E0930 17:16:50.429281 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.512602 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.513048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.513216 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.513536 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.513778 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.617304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.618362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.618592 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.618812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.619031 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.633390 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.633736 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.633904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.634131 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.634327 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: E0930 17:16:50.653345 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.657731 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.657773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.657787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.657804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.657815 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: E0930 17:16:50.671478 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.675129 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.675154 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.675163 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.675176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.675185 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: E0930 17:16:50.685877 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.689700 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.689738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.689749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.689767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.689775 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: E0930 17:16:50.703120 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.709020 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.709060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.709068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.709082 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.709095 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: E0930 17:16:50.723288 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:50Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:50 crc kubenswrapper[4688]: E0930 17:16:50.723661 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.725182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.725209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.725219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.725231 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.725241 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.827873 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.827911 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.827922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.827945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.827967 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.930997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.931083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.931103 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.931134 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:50 crc kubenswrapper[4688]: I0930 17:16:50.931155 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:50Z","lastTransitionTime":"2025-09-30T17:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.035173 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.035754 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.035767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.035786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.035799 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.138964 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.139026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.139042 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.139066 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.139079 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.241753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.241821 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.241833 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.241852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.241867 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.345351 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.345416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.345434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.345463 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.345487 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.428872 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.428936 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.428872 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:51 crc kubenswrapper[4688]: E0930 17:16:51.429072 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:51 crc kubenswrapper[4688]: E0930 17:16:51.429201 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:51 crc kubenswrapper[4688]: E0930 17:16:51.429286 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.448321 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.448356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.448365 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.448380 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.448390 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.550926 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.550980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.550992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.551013 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.551028 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.653760 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.653805 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.653817 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.653834 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.653845 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.757193 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.757253 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.757270 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.757299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.757317 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.860268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.860363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.860387 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.860421 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.860440 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.947531 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/0.log" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.947648 4688 generic.go:334] "Generic (PLEG): container finished" podID="03cb23bc-dcc1-46e7-b238-67a6163f456d" containerID="6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416" exitCode=1 Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.947702 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9wbfw" event={"ID":"03cb23bc-dcc1-46e7-b238-67a6163f456d","Type":"ContainerDied","Data":"6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.948330 4688 scope.go:117] "RemoveContainer" containerID="6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.968044 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.968122 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.968139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.968167 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.968187 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:51Z","lastTransitionTime":"2025-09-30T17:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.968525 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.981584 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:51 crc kubenswrapper[4688]: I0930 17:16:51.993389 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:51Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.008447 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.026307 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.038868 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.056818 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.070921 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.070988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.071007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.071037 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.071056 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:52Z","lastTransitionTime":"2025-09-30T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.071921 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.091944 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"2025-09-30T17:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1\\\\n2025-09-30T17:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1 to /host/opt/cni/bin/\\\\n2025-09-30T17:16:06Z [verbose] multus-daemon started\\\\n2025-09-30T17:16:06Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.110139 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.122387 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.134899 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.148534 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.162648 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.175068 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.175328 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.175381 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.175396 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.175419 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.175434 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:52Z","lastTransitionTime":"2025-09-30T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.195020 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708
f3825a903e60f73d9b803b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.209246 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.278270 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.278304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.278314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.278328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.278340 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:52Z","lastTransitionTime":"2025-09-30T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.381314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.381377 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.381393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.381417 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.381433 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:52Z","lastTransitionTime":"2025-09-30T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.428316 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:52 crc kubenswrapper[4688]: E0930 17:16:52.428476 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.429730 4688 scope.go:117] "RemoveContainer" containerID="5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.483899 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.483934 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.483944 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.483960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.483972 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:52Z","lastTransitionTime":"2025-09-30T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.587140 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.587173 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.587182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.587199 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.587210 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:52Z","lastTransitionTime":"2025-09-30T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.689752 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.689792 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.689800 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.689834 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.689847 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:52Z","lastTransitionTime":"2025-09-30T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.804616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.804666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.804679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.804701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.804713 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:52Z","lastTransitionTime":"2025-09-30T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.908246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.908296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.908310 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.908337 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.908349 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:52Z","lastTransitionTime":"2025-09-30T17:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.955392 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/2.log" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.960306 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.961166 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.964514 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/0.log" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.964600 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9wbfw" event={"ID":"03cb23bc-dcc1-46e7-b238-67a6163f456d","Type":"ContainerStarted","Data":"d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1"} Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.976126 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:52 crc kubenswrapper[4688]: I0930 17:16:52.997002 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa
3b36f987f2c5d0a90606db4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:52Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.010732 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.010780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.010793 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.010812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.010824 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.017087 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.031328 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.041582 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.051470 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.062303 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.074797 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.089614 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.105142 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.113104 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.113137 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.113147 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.113162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.113173 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.118060 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.132605 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"2025-09-30T17:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1\\\\n2025-09-30T17:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1 to /host/opt/cni/bin/\\\\n2025-09-30T17:16:06Z [verbose] multus-daemon started\\\\n2025-09-30T17:16:06Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.148739 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.161262 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.172390 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.190596 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.205861 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.215984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.216067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.216087 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.216116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.216136 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.223429 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.238158 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.256101 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.268645 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.293328 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.309051 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.318141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.318175 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.318187 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.318240 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 
17:16:53.318250 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.327291 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.342835 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.358612 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"2025-09-30T17:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1\\\\n2025-09-30T17:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1 to /host/opt/cni/bin/\\\\n2025-09-30T17:16:06Z [verbose] multus-daemon started\\\\n2025-09-30T17:16:06Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.378164 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.389027 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.399446 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.416537 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.420674 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.420707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.420719 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.420740 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.420753 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.429234 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:53 crc kubenswrapper[4688]: E0930 17:16:53.429352 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.429548 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:53 crc kubenswrapper[4688]: E0930 17:16:53.429645 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.429827 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:53 crc kubenswrapper[4688]: E0930 17:16:53.429905 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.432613 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.445030 4688 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.458027 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.484683 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.501909 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.514804 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.
11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.523442 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.523504 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.523524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.523549 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.523592 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.536349 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa
3b36f987f2c5d0a90606db4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.549062 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.568284 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.581370 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.594834 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.606610 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.619889 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"2025-09-30T17:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1\\\\n2025-09-30T17:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1 to /host/opt/cni/bin/\\\\n2025-09-30T17:16:06Z [verbose] multus-daemon started\\\\n2025-09-30T17:16:06Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.625326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.625353 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.625404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.625423 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.625433 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.637106 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.649281 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.662231 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
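
The kube-multus termination message captured above (exit at 17:16:51, code 1) shows the daemon timing out while polling for the OVN readiness-indicator file, and its "pollimmediate error: timed out waiting for the condition" text matches apimachinery's wait.PollImmediate. A self-contained sketch of that wait pattern follows; it is illustrative rather than multus's actual code, and the interval and timeout are arbitrary choices, not values from this log:

```go
// readiness_wait.go - sketch of the poll-for-file pattern implied by the
// kube-multus message above; not the actual multus implementation.
package main

import (
	"log"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Indicator path quoted in the log.
	indicator := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"

	// Poll until the readiness-indicator file appears. If the timeout
	// elapses first, PollImmediate returns "timed out waiting for the
	// condition", the same error text seen in the container's last state.
	err := wait.PollImmediate(1*time.Second, 45*time.Second, func() (bool, error) {
		_, statErr := os.Stat(indicator)
		if os.IsNotExist(statErr) {
			return false, nil // not there yet; keep polling
		}
		return statErr == nil, statErr
	})
	if err != nil {
		log.Fatalf("readiness indicator never appeared: %v", err)
	}
	log.Printf("default network ready: %s exists", indicator)
}
```

The file never appears here because ovnkube-controller, which would write it, is itself failing (see the CrashLoopBackOff entries later in this section), so the restart at 17:16:52 only defers the same timeout.
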
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.676128 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.688926 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.700060 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
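
The patch bodies in these "failed to patch status" entries are hard to read because klog quotes the whole JSON document as a Go string, and the journal quoting adds a further layer of backslashes. One strconv.Unquote pass per quoting layer recovers plain JSON. The sketch below shows the idea with a shortened, hypothetical payload standing in for the multi-kilobyte patches logged here:

```go
// patch_decode.go - helper sketch for reading the escaped status patches.
// The payload is a hypothetical, abbreviated stand-in; repeat Unquote once
// per quoting layer present in your copy of the log line.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

func main() {
	// A patch as it appears inside err="failed to patch status \"...\"":
	// a double-quoted Go string wrapping the JSON document.
	escaped := `"{\"metadata\":{\"uid\":\"e3f10831-407a-4e6c-adb7-014837e056bd\"},\"status\":{\"phase\":\"Running\"}}"`

	raw, err := strconv.Unquote(escaped)
	if err != nil {
		log.Fatalf("unquote: %v", err)
	}

	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		log.Fatalf("indent: %v", err)
	}
	fmt.Println(pretty.String())
}
```

Decoded, each payload is an ordinary strategic-merge patch of pod .status (conditions, containerStatuses, podIPs), which is why the same webhook failure blocks status reporting for every pod on the node at once.
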
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 
17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.715876 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.728207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.728260 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.728285 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.728320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.728347 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.728795 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.831430 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.831491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.831506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.831531 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.831547 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.933650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.934010 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.934120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.934226 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.934325 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:53Z","lastTransitionTime":"2025-09-30T17:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.969678 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/3.log"
Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.970867 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/2.log"
Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.975092 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" exitCode=1
Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.975153 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"}
Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.975204 4688 scope.go:117] "RemoveContainer" containerID="5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a"
Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.975927 4688 scope.go:117] "RemoveContainer" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"
Sep 30 17:16:53 crc kubenswrapper[4688]: E0930 17:16:53.976110 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480"
Sep 30 17:16:53 crc kubenswrapper[4688]: I0930 17:16:53.990605 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
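
The PLEG events above show why the node keeps flipping to NotReady: ovnkube-controller, the container that would produce the missing CNI configuration (10-ovn-kubernetes.conf), exits with code 1 and is held in CrashLoopBackOff. The quoted delay, "back-off 40s", is consistent with kubelet's default restart backoff, which by upstream defaults (assumed here, not stated in this log) starts at 10s and doubles per restart up to a 5m cap:

```go
// backoff.go - sketch of the restart delay behind "back-off 40s restarting
// failed container=ovnkube-controller". Base, doubling, and cap are assumed
// from upstream kubelet defaults, not taken from this log.
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(restarts int) time.Duration {
	const base = 10 * time.Second
	const maxDelay = 5 * time.Minute
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r <= 5; r++ {
		fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r))
	}
	// Under these assumptions, the third back-off step yields 40s,
	// matching the pod_workers.go message above.
}
```

Until this container stays up long enough to write the CNI config, the "no CNI configuration file in /etc/kubernetes/cni/net.d/" condition repeats on every node status sync, and the webhook certificate failure prevents even that status from being reported.
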
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:53Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.001771 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.019017 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f712af7db90f2beea03633a4a746e512d56b708f3825a903e60f73d9b803b3a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:28Z\\\",\\\"message\\\":\\\"metrics per-node LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578850 6361 services_controller.go:453] Built service openshift-kube-storage-version-migrator-operator/metrics template LB for network=default: []services.LB{}\\\\nI0930 17:16:28.578862 6361 services_controller.go:454] Service openshift-kube-storage-version-migrator-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0930 17:16:28.578623 6361 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9wbfw\\\\nF0930 17:16:28.578817 6361 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x50\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:53Z\\\",\\\"message\\\":\\\"ions:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid 
== {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0930 17:16:53.265337 6688 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:16:53.263091 6688 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.030083 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.036335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.036470 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.036560 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.036689 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.036782 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.042284 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.053514 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.063866 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.076378 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.089075 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.102249 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"2025-09-30T17:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1\\\\n2025-09-30T17:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1 to /host/opt/cni/bin/\\\\n2025-09-30T17:16:06Z [verbose] multus-daemon started\\\\n2025-09-30T17:16:06Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.123152 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.134064 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.139303 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.139361 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.139373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.139392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.139406 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.146045 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.156493 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.166398 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.178005 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"qu
ay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.190258 4688 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.242265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.242305 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: 
I0930 17:16:54.242317 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.242337 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.242351 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.345279 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.345342 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.345355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.345375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.345389 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.429222 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:16:54 crc kubenswrapper[4688]: E0930 17:16:54.429395 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.447903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.447960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.447977 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.448018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.448037 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.551318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.551381 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.551412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.551431 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.551442 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.654595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.654643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.654654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.654671 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.654683 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.757447 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.757498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.757508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.757525 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.757538 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.860320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.860387 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.860402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.860427 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.860442 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.962697 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.962760 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.962776 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.962797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.962810 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:54Z","lastTransitionTime":"2025-09-30T17:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.980426 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/3.log" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.984503 4688 scope.go:117] "RemoveContainer" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:16:54 crc kubenswrapper[4688]: E0930 17:16:54.984685 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" Sep 30 17:16:54 crc kubenswrapper[4688]: I0930 17:16:54.999764 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:54Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.012649 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.029473 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:53Z\\\",\\\"message\\\":\\\"ions:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0930 17:16:53.265337 6688 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:16:53.263091 6688 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.040808 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.053135 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.066093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.066134 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.066144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.066165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.066176 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.066851 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.076690 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.089940 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.102000 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.114664 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"2025-09-30T17:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1\\\\n2025-09-30T17:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1 to /host/opt/cni/bin/\\\\n2025-09-30T17:16:06Z [verbose] multus-daemon started\\\\n2025-09-30T17:16:06Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.127456 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.136698 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.147897 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.160039 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.168585 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.168633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.168653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.168674 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:55 crc kubenswrapper[4688]: I0930
17:16:55.168687 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.171759 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.186143 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.198778 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:16:55Z is after 2025-08-24T17:21:41Z" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.274825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.275402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.275445 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.275471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.275488 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.378457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.378515 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.378528 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.378550 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.378564 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.429340 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.429349 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.429374 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:55 crc kubenswrapper[4688]: E0930 17:16:55.429838 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:55 crc kubenswrapper[4688]: E0930 17:16:55.429577 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:55 crc kubenswrapper[4688]: E0930 17:16:55.430039 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.481447 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.481500 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.481517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.481541 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.481558 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.584405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.584485 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.584505 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.584565 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.584622 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.687412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.687482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.687506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.687532 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.687548 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.790425 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.790491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.790508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.790531 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.790548 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.894014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.894063 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.894079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.894111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.894130 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.997364 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.997421 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.997432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.997451 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:55 crc kubenswrapper[4688]: I0930 17:16:55.997465 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:55Z","lastTransitionTime":"2025-09-30T17:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.100179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.100254 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.100266 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.100291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.100305 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:56Z","lastTransitionTime":"2025-09-30T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.202585 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.202695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.202711 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.202732 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.202746 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:56Z","lastTransitionTime":"2025-09-30T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.304818 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.304924 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.304955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.305007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.305034 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:56Z","lastTransitionTime":"2025-09-30T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.408100 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.408144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.408155 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.408171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.408182 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:56Z","lastTransitionTime":"2025-09-30T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
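Interleaved with the webhook failures, the kubelet keeps re-recording NotReady because the runtime reports no CNI network. The condition it is relaying boils down to a simple test: does /etc/kubernetes/cni/net.d/ contain a network config? A hedged sketch of the equivalent check; the extension list mirrors the usual CNI config loader's *.conf/*.conflist/*.json set, which is an assumption, not something the log states:

```python
# Sketch: reproduce the check behind "no CNI configuration file in
# /etc/kubernetes/cni/net.d/". An empty or missing directory keeps
# NetworkReady=false and the node NotReady.
import os

CNI_DIR = "/etc/kubernetes/cni/net.d"  # path from the kubelet message

try:
    configs = sorted(
        name for name in os.listdir(CNI_DIR)
        if name.endswith((".conf", ".conflist", ".json"))
    )
except FileNotFoundError:
    configs = []

if configs:
    print("CNI configs:", ", ".join(configs))
else:
    print("no CNI configuration found -- node will stay NotReady")
```

A config normally lands in that directory once the cluster network provider finishes starting, which is exactly what the trailing question in the message is asking about.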
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.512309 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.512371 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.512384 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.512407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.512420 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:56Z","lastTransitionTime":"2025-09-30T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.615684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.615753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.615768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.615786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.615799 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:56Z","lastTransitionTime":"2025-09-30T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.719250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.719349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.719376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.719403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.719420 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:56Z","lastTransitionTime":"2025-09-30T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.822726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.822779 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.822789 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.822806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.822817 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:56Z","lastTransitionTime":"2025-09-30T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.925814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.925892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.925914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.925946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:56 crc kubenswrapper[4688]: I0930 17:16:56.925965 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:56Z","lastTransitionTime":"2025-09-30T17:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.029289 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.029356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.029372 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.029406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.029420 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.132880 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.132937 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.132949 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.132967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.132982 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.236060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.236111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.236121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.236140 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.236153 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.338890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.338934 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.338952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.338970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.338981 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.428502 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.428610 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv"
Sep 30 17:16:57 crc kubenswrapper[4688]: E0930 17:16:57.428693 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.428732 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:16:57 crc kubenswrapper[4688]: E0930 17:16:57.428841 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7"
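Every setters.go:603 entry carries the full Ready condition as JSON after condition=, so the node's state can be pulled out of the journal mechanically. A small sketch; the sample string is copied from the entries above:

```python
# Sketch: extract the Ready condition from a "Node became not ready"
# journal line. The JSON shape is exactly what setters.go:603 logs above.
import json

line = (
    'condition={"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2025-09-30T17:16:57Z",'
    '"lastTransitionTime":"2025-09-30T17:16:57Z",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready: NetworkReady=false '
    'reason:NetworkPluginNotReady message:Network plugin returns error: '
    'no CNI configuration file in /etc/kubernetes/cni/net.d/. '
    'Has your network provider started?"}'
)

cond = json.loads(line.split("condition=", 1)[1])
print(cond["type"], cond["status"], cond["reason"])  # Ready False KubeletNotReady
print(cond["message"])
```

The reason/message pair here is the same text kubectl describe node would surface for the node's Ready condition.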
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.440826 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.440897 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.440908 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.440929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.440944 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.544630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.544692 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.544705 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.544727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.544739 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.648503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.648564 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.648599 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.648630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.648644 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.751614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.751707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.751721 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.751742 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.751754 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.854116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.854174 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.854187 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.854206 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.854220 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.957948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.958000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.958012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.958031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:57 crc kubenswrapper[4688]: I0930 17:16:57.958043 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:57Z","lastTransitionTime":"2025-09-30T17:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.061098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.061163 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.061183 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.061208 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.061227 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.164588 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.164641 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.164654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.164680 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.164696 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.267860 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.267922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.267934 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.267953 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.267965 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.370967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.371016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.371029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.371046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.371062 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
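The NotReady entries also recur on a fixed beat rather than at random. Measuring the gaps between consecutive setters.go timestamps (the three samples below are copied from the surrounding entries) shows a tight ~0.1 s node status loop, distinct from the roughly once-per-second pod sync retries:

```python
# Sketch: measure the cadence of the repeated "Node became not ready"
# entries. Timestamps copied from nearby setters.go:603 lines.
from datetime import datetime

stamps = ["17:16:58.267965", "17:16:58.371062", "17:16:58.474398"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
print([round((b - a).total_seconds(), 3) for a, b in zip(times, times[1:])])
# -> [0.103, 0.103]
```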
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.474278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.474337 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.474353 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.474379 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.474398 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.577977 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.578045 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.578065 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.578095 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.578115 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.681433 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.681523 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.681536 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.681557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.681573 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.784912 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.785075 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.785092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.785111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.785122 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.887168 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.887239 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.887255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.887272 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.887284 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.990684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.990732 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.990751 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.990772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:58 crc kubenswrapper[4688]: I0930 17:16:58.990782 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:58Z","lastTransitionTime":"2025-09-30T17:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.093931 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.093968 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.093978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.093994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.094006 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.196993 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.197061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.197078 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.197103 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.197123 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.300523 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.300567 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.300614 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.300633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.300646 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.404284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.404349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.404362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.404381 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.404394 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.429291 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.429312 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.429458 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:16:59 crc kubenswrapper[4688]: E0930 17:16:59.429669 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:16:59 crc kubenswrapper[4688]: E0930 17:16:59.429805 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:16:59 crc kubenswrapper[4688]: E0930 17:16:59.429938 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.507173 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.507213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.507226 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.507244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.507257 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.609186 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.609230 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.609239 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.609260 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.609271 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.609271 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.711775 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.711820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.711832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.711851 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.711866 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.823910 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.823999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.824024 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.824060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.824092 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.927642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.927700 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.927713 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.927736 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:16:59 crc kubenswrapper[4688]: I0930 17:16:59.927749 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:16:59Z","lastTransitionTime":"2025-09-30T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.030299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.030356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.030370 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.030392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.030408 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.138321 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.138382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.138623 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.138655 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.138672 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.241989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.242072 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.242095 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.242125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.242143 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.345156 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.345214 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.345232 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.345257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.345272 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.428338 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:00 crc kubenswrapper[4688]: E0930 17:17:00.428538 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.447933 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.447991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.448005 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.448029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.448045 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.551410 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.551449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.551460 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.551479 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.551490 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.654384 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.654440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.654452 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.654471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.654484 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.756899 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.756961 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.756978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.757002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.757018 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.805176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.805231 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.805244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.805263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.805274 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: E0930 17:17:00.823976 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.829163 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.829206 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.829216 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.829233 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.829243 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: E0930 17:17:00.841955 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.846186 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.846212 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.846220 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.846234 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.846245 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: E0930 17:17:00.860915 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.864773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.864832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.864844 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.864866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.864878 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: E0930 17:17:00.879098 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:00Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.882514 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.882563 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.882611 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.882633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.882657 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:00 crc kubenswrapper[4688]: E0930 17:17:00.901225 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 30 17:17:00 crc kubenswrapper[4688]: E0930 17:17:00.901435 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.903697 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
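The err= strings above are hard to read because journald shows the kubelet's embedded JSON patch with several layers of quote escaping (each quote appears as \\\" in the raw text). A throwaway Go helper (hypothetical, shown only to make these payloads legible) can peel off the escaping and pretty-print the patch:

// unescape_patch.go - hypothetical helper, not part of any OpenShift tooling.
// Paste the escaped patch from an err="failed to patch status ..." log line
// into patch.txt, then run this to get indented JSON.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("patch.txt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	s := strings.TrimSpace(string(raw))
	// Quotes are escaped up to two levels deep in the journal text; peel one
	// level per pass until the payload parses as JSON.
	for i := 0; i < 3; i++ {
		var buf bytes.Buffer
		if json.Indent(&buf, []byte(s), "", "  ") == nil {
			fmt.Println(buf.String())
			return
		}
		s = strings.ReplaceAll(s, `\"`, `"`)
		s = strings.ReplaceAll(s, `\\`, `\`)
	}
	fmt.Println("could not decode payload after 3 passes")
}

Decoded, the payload is nothing unusual: the node's conditions, allocatable/capacity figures, cached image list, and nodeInfo. The rejection comes entirely from the webhook TLS failure, not from anything in the patch itself.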
event="NodeHasSufficientMemory" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.903734 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.903745 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.903769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:00 crc kubenswrapper[4688]: I0930 17:17:00.903783 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:00Z","lastTransitionTime":"2025-09-30T17:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.005786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.005847 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.005862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.005883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.005896 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.108302 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.108351 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.108361 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.108379 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.108389 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.211816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.211863 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.211874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.211890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.211904 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.314451 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.314520 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.314533 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.314556 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.314571 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.417179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.417440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.417454 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.417478 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.417494 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.428372 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.428401 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:01 crc kubenswrapper[4688]: E0930 17:17:01.428502 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.428682 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:01 crc kubenswrapper[4688]: E0930 17:17:01.428804 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:01 crc kubenswrapper[4688]: E0930 17:17:01.428925 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
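These pod-level errors are the downstream effect of the node-level condition: while NetworkReady=false, sandboxes for network-metrics-daemon-gbfmv, network-check-target-xd92c, and network-check-source-55646444c4-trplf cannot be created. The readiness test behind the repeated message is simple: the container runtime looks for a CNI network config (a .conf, .conflist, or .json file) in /etc/kubernetes/cni/net.d/. Below is a minimal sketch of that check as a standalone Go program (hypothetical helper; on CRC the real config is written by the OVN-Kubernetes pods once they come up, so the fix is the certificate problem above, not creating a file by hand):

// cni_check.go - hypothetical standalone helper, not OpenShift code.
// Reports whether any CNI network config exists in the directory named
// by the kubelet error.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path from the kubelet message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "->", err)
		return
	}
	found := false
	for _, e := range entries {
		ext := strings.ToLower(filepath.Ext(e.Name()))
		// libcni (used by the container runtime) loads .conf, .conflist
		// and .json network configs.
		if ext == ".conf" || ext == ".conflist" || ext == ".json" {
			fmt.Println("network config present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file - NetworkReady will stay false")
	}
}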
Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.520204 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.520278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.520299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.520327 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.520350 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.624692 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.624754 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.624771 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.624798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.624817 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.728505 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.728562 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.728584 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.728650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.728686 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.832121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.832220 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.832252 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.832301 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.832335 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.936424 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.936512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.936536 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.936604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:01 crc kubenswrapper[4688]: I0930 17:17:01.936628 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:01Z","lastTransitionTime":"2025-09-30T17:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.044154 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.044707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.045849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.045896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.045923 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.149772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.149861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.149890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.149926 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.149954 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.253277 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.253338 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.253350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.253371 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.253384 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.356558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.356644 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.356660 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.356678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.356688 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.429245 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:02 crc kubenswrapper[4688]: E0930 17:17:02.429488 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.459646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.459707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.459721 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.459743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.459758 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.562670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.562783 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.562810 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.562841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.562861 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.666499 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.666610 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.666638 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.666672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.666695 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.770170 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.770249 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.770274 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.770307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.770332 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.874828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.874882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.874902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.874928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.874947 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.978982 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.979050 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.979070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.979099 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:02 crc kubenswrapper[4688]: I0930 17:17:02.979118 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:02Z","lastTransitionTime":"2025-09-30T17:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.081767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.081839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.081861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.081893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.081915 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:03Z","lastTransitionTime":"2025-09-30T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.184150 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.184205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.184216 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.184237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.184437 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:03Z","lastTransitionTime":"2025-09-30T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.287268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.287307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.287320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.287339 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.287351 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:03Z","lastTransitionTime":"2025-09-30T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.389825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.389863 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.389874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.389891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.389902 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:03Z","lastTransitionTime":"2025-09-30T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.428542 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:03 crc kubenswrapper[4688]: E0930 17:17:03.428704 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.428748 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:03 crc kubenswrapper[4688]: E0930 17:17:03.428835 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.429333 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:03 crc kubenswrapper[4688]: E0930 17:17:03.431080 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.442827 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.455752 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.466849 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.478849 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.493013 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.493045 
4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.493058 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.493077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.493090 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:03Z","lastTransitionTime":"2025-09-30T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.496646 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"na
me\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.508953 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 
17:17:03.526067 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.537926 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.549676 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.559877 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.571705 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"2025-09-30T17:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1\\\\n2025-09-30T17:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1 to /host/opt/cni/bin/\\\\n2025-09-30T17:16:06Z [verbose] multus-daemon started\\\\n2025-09-30T17:16:06Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.583602 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 
17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.595125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.595171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.595185 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.595204 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.595219 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:03Z","lastTransitionTime":"2025-09-30T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.597902 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.609477 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.620730 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.629744 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.653231 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:53Z\\\",\\\"message\\\":\\\"ions:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0930 17:16:53.265337 6688 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:16:53.263091 6688 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:03Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.698618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.698695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.698726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.698769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.698876 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:03Z","lastTransitionTime":"2025-09-30T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.802369 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.802430 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.802444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.802465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.802478 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:03Z","lastTransitionTime":"2025-09-30T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.905472 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.905520 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.905531 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.905549 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:03 crc kubenswrapper[4688]: I0930 17:17:03.905560 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:03Z","lastTransitionTime":"2025-09-30T17:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.008358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.008413 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.008426 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.008446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.008463 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.111262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.111296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.111308 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.111322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.111333 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.213273 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.213344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.213362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.213389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.213408 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.315914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.315959 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.315970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.315988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.315998 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.422935 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.422966 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.422975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.422989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.422999 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.428881 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:04 crc kubenswrapper[4688]: E0930 17:17:04.429004 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.525661 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.525711 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.525723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.525739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.525752 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.629428 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.629472 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.629485 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.629502 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.629513 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.732123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.732471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.732482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.732502 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.732513 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.834649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.834700 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.834712 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.834730 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.834747 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.937733 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.937779 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.937789 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.937809 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:04 crc kubenswrapper[4688]: I0930 17:17:04.937821 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:04Z","lastTransitionTime":"2025-09-30T17:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.039968 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.040030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.040048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.040072 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.040090 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:05Z","lastTransitionTime":"2025-09-30T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.143045 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.143097 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.143109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.143130 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.143143 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:05Z","lastTransitionTime":"2025-09-30T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.245815 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.245861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.245872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.245891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.245906 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:05Z","lastTransitionTime":"2025-09-30T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.348363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.348423 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.348437 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.348458 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.348471 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:05Z","lastTransitionTime":"2025-09-30T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.428897 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.428930 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:05 crc kubenswrapper[4688]: E0930 17:17:05.429145 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.428949 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:05 crc kubenswrapper[4688]: E0930 17:17:05.429256 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:05 crc kubenswrapper[4688]: E0930 17:17:05.429296 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.451145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.451194 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.451206 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.451228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.451242 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:05Z","lastTransitionTime":"2025-09-30T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.554843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.554883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.554894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.554912 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.554924 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:05Z","lastTransitionTime":"2025-09-30T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 30 17:15:41 crc kubenswrapper[4688]: I0930 17:17:05.657869 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.657950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.657982 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.658062 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:17:05 crc kubenswrapper[4688]: I0930 17:17:05.658104 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:05Z","lastTransitionTime":"2025-09-30T17:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five-message node-status block above repeats unchanged, apart from timestamps, at 17:17:05.760, 17:17:05.866, 17:17:05.971, 17:17:06.075, 17:17:06.178, 17:17:06.282 and 17:17:06.386.]
Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.428546 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:17:06 crc kubenswrapper[4688]: E0930 17:17:06.428793 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
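The NetworkReady=false condition is reported by the container runtime, which looks for a CNI configuration file in /etc/kubernetes/cni/net.d/; on this cluster that file would normally be written by the OVN-Kubernetes pods, which the end of this excerpt shows crash-looping. A minimal Go sketch of that readiness check, assuming the directory from the log message and the extensions (.conf, .conflist, .json) that libcni-style loaders accept; this is not the actual kubelet/CRI-O code:

// cnicheck.go - minimal sketch, NOT the real kubelet/CRI-O implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cniConfigPresent(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // treat a missing/unreadable dir as "no config"
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	if cniConfigPresent("/etc/kubernetes/cni/net.d") {
		fmt.Println("NetworkReady=true")
		return
	}
	fmt.Println("NetworkReady=false: no CNI configuration file. Has your network provider started?")
}

Once a network provider writes a usable config into that directory, the condition flips and the node can go Ready.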
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.489389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.489453 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.489465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.489485 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.489499 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:06Z","lastTransitionTime":"2025-09-30T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.592901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.592958 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.592975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.592997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.593015 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:06Z","lastTransitionTime":"2025-09-30T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.696472 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.696545 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.696602 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.696638 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.696662 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:06Z","lastTransitionTime":"2025-09-30T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.800712 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.800800 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.800838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.800869 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.800892 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:06Z","lastTransitionTime":"2025-09-30T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.904193 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.904265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.904288 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.904320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:06 crc kubenswrapper[4688]: I0930 17:17:06.904343 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:06Z","lastTransitionTime":"2025-09-30T17:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.009180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.009233 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.009244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.009262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.009274 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:07Z","lastTransitionTime":"2025-09-30T17:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.074616 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.075043 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:11.074991787 +0000 UTC m=+148.370429385 (durationBeforeRetry 1m4s). 
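The unmount fails because the kubevirt.io.hostpath-provisioner CSI driver has not (re)registered with this kubelet yet after the restart, so kubelet parks the operation. The 1m4s durationBeforeRetry is consistent with kubelet's per-volume exponential backoff (upstream defaults, assumed here: initial 500ms, doubling per consecutive failure, capped around two minutes), which would make 1m4s the eighth consecutive failure. A small sketch that just prints that assumed schedule:

// backoff.go - volume-operation retry schedule implied by "durationBeforeRetry 1m4s".
// Constants assumed from upstream kubelet defaults; not read from this node.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for failure := 1; failure <= 9; failure++ {
		fmt.Printf("failure %d -> wait %v before retry\n", failure, delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	// failure 8 prints "wait 1m4s", matching the log line above.
}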
[Node-status block repeats at 17:17:07.112.]
Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.175865 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.175965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.176020 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.176062 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176181 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176318 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:18:11.176296028 +0000 UTC m=+148.471733606 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176191 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176371 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176482 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176414 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176543 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176195 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176612 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 17:18:11.176600906 +0000 UTC m=+148.472038464 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176509 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176660 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 17:18:11.176630567 +0000 UTC m=+148.472068135 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.176713 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 17:18:11.176690938 +0000 UTC m=+148.472128536 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
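This burst of MountVolume.SetUp failures shares one cause: "object ... not registered" means kubelet's secret/configmap managers have not yet (re)registered the referenced objects after the restart, something typically seen while the node is still coming up; the mounts are parked with the same 1m4s backoff. A throwaway triage sketch (hypothetical helper, not part of any tool) that tallies which objects are blocking mounts from journal text on stdin:

// notregistered.go - illustrative triage only. Example use:
//   journalctl -u kubelet | go run notregistered.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`object "([^"]+)"/"([^"]+)" not registered`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+"/"+m[2]]++ // namespace/name
		}
	}
	for obj, n := range counts {
		fmt.Printf("%6d  %s\n", n, obj)
	}
}

On this excerpt it would report kube-root-ca.crt, openshift-service-ca.crt, networking-console-plugin and networking-console-plugin-cert as the blockers.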
[Node-status block repeats at 17:17:07.215, 17:17:07.317 and 17:17:07.421.]
Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.428612 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.428629 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv"
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.428887 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 17:17:07 crc kubenswrapper[4688]: I0930 17:17:07.428936 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.429054 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7"
Sep 30 17:17:07 crc kubenswrapper[4688]: E0930 17:17:07.429362 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[Node-status block repeats at 17:17:07.523 and 17:17:07.626.]
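Each "No sandbox for pod can be found" / "Error syncing pod, skipping" pair is a pod worker hitting the network-not-ready gate before it can create the pod sandbox; in this excerpt the same pods are retried every couple of seconds. A hypothetical helper (illustrative only) that measures the per-pod retry cadence from journal text on stdin, using the klog timestamp prefix these lines carry:

// syncgaps.go - measure how often each pod's sync is retried.
// Example use: journalctl -u kubelet | go run syncgaps.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

func main() {
	re := regexp.MustCompile(`[IE](\d{4} \d{2}:\d{2}:\d{2}\.\d{6}).*"Error syncing pod, skipping".*pod="([^"]+)"`)
	last := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		t, err := time.Parse("0102 15:04:05.000000", m[1]) // klog prefix, e.g. 0930 17:17:07.428887
		if err != nil {
			continue
		}
		if prev, ok := last[m[2]]; ok {
			fmt.Printf("%-60s retried after %v\n", m[2], t.Sub(prev))
		}
		last[m[2]] = t
	}
}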
[Node-status block repeats at 17:17:07.730, 17:17:07.833, 17:17:07.938, 17:17:08.041, 17:17:08.145, 17:17:08.249 and 17:17:08.353.]
Sep 30 17:17:08 crc kubenswrapper[4688]: I0930 17:17:08.429125 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:17:08 crc kubenswrapper[4688]: E0930 17:17:08.434218 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[Node-status block repeats every ~100 ms from 17:17:08.457 through 17:17:09.387 (10 occurrences).]
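Every "Node became not ready" line embeds the literal NodeCondition kubelet writes to the API server, so the repeated blocks carry exactly six fields. Decoding one (payload copied from a log line above; the struct mirrors the core/v1 NodeCondition field names but is declared locally so the sketch stays dependency-free):

// condition.go - decode the condition={...} payload from a "Node became not ready" line.
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	payload := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:09Z","lastTransitionTime":"2025-09-30T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(payload), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}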
Has your network provider started?"} Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.429185 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.429229 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.429398 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:09 crc kubenswrapper[4688]: E0930 17:17:09.429392 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:09 crc kubenswrapper[4688]: E0930 17:17:09.429471 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:09 crc kubenswrapper[4688]: E0930 17:17:09.429546 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.491367 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.491445 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.491468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.491502 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.491525 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:09Z","lastTransitionTime":"2025-09-30T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.595022 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.595110 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.595132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.595159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.595177 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:09Z","lastTransitionTime":"2025-09-30T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.697860 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.697928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.697946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.697977 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.697999 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:09Z","lastTransitionTime":"2025-09-30T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.801324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.801398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.801416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.801444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.801463 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:09Z","lastTransitionTime":"2025-09-30T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.904691 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.904742 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.904755 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.904774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:09 crc kubenswrapper[4688]: I0930 17:17:09.904789 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:09Z","lastTransitionTime":"2025-09-30T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.008442 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.008508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.008531 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.008563 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.008624 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.112416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.112481 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.112517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.112554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.112616 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.215945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.216019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.216043 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.216077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.216101 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.319798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.319866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.319884 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.319910 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.319929 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.422938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.422990 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.423009 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.423036 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.423054 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.428616 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:10 crc kubenswrapper[4688]: E0930 17:17:10.429306 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.429853 4688 scope.go:117] "RemoveContainer" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:17:10 crc kubenswrapper[4688]: E0930 17:17:10.430200 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.526910 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.526962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.526980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.527000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.527013 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.630221 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.630281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.630308 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.630329 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.630342 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.735295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.735363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.735380 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.735407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.735427 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.845296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.845389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.845407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.845434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.845453 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.948524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.948560 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.948569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.948620 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:10 crc kubenswrapper[4688]: I0930 17:17:10.948630 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:10Z","lastTransitionTime":"2025-09-30T17:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.051516 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.051617 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.051642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.051673 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.051696 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.155147 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.155206 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.155226 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.155255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.155280 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.164927 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.165018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.165041 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.165068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.165088 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: E0930 17:17:11.216299 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:11Z is after 2025-08-24T17:21:41Z"
Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.221761 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.221841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
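
The patch attempt above finally shows why the node status cannot be persisted: the node.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24T17:21:41Z, more than a month before the logged wall-clock time. The comparison the TLS handshake performed can be reproduced from the two timestamps in the error alone:

    # Re-do the validity check from the two timestamps in the error message.
    from datetime import datetime, timezone

    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # cert notAfter
    now = datetime(2025, 9, 30, 17, 17, 11, tzinfo=timezone.utc)        # "current time"
    print(now > not_after)   # True -> x509: certificate has expired
    print(now - not_after)   # 36 days, 23:55:30 past expiry
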
event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.221862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.221892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.221910 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: E0930 17:17:11.248926 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:11Z is after 2025-08-24T17:21:41Z"
Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.253819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.253876 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
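
The retry above fails identically, so the certificate served at https://127.0.0.1:9743 can be inspected directly to confirm its validity window. A diagnostic sketch, assuming it runs on the node itself (the webhook listens on loopback per the log) and that the third-party cryptography package is installed and recent enough to expose not_valid_after_utc; verification is disabled deliberately, since an expired certificate would otherwise abort the handshake before the certificate can be read:

    # Diagnostic sketch (assumptions noted above), not part of any cluster tooling.
    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint taken from the log

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # we only want to read the certificate,
    ctx.verify_mode = ssl.CERT_NONE  # so skip verification of the expired cert

    with socket.create_connection((HOST, PORT), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("notAfter:", cert.not_valid_after_utc)
    print("expired: ", datetime.now(timezone.utc) > cert.not_valid_after_utc)
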
event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.253890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.253912 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.253927 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: E0930 17:17:11.274241 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:11Z is after 2025-08-24T17:21:41Z"
Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.278554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.278629 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.278646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.278669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.278683 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: E0930 17:17:11.293320 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.298981 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.299081 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
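[editor's aside] The status patch above never reaches the API server: the kubelet cannot verify the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743, which expired on 2025-08-24T17:21:41Z while the clock now reads 2025-09-30. A minimal probe for confirming that validity window from the node — a diagnostic sketch, not part of any OpenShift tooling; InsecureSkipVerify is deliberate here so the expired certificate can be read at all:

```go
// certcheck.go: dial the webhook endpoint named in the kubelet error and
// print the served certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint taken from the error above
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s\n  notBefore=%s\n  notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```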
event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.299139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.299164 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.299214 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: E0930 17:17:11.313939 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T17:17:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ffc5c315-94a9-4b6f-ba92-5cbd216a8d9c\\\",\\\"systemUUID\\\":\\\"88424a46-ac45-4c7d-88cf-12b6be491796\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:11Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:11 crc kubenswrapper[4688]: E0930 17:17:11.314099 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.316419 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
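[editor's aside] The pair of entries above shows how each status sync ends: the PATCH is attempted a fixed number of times and then abandoned with "update node status exceeds retry count". A sketch of that loop's shape — upstream kubelet uses nodeStatusUpdateRetry = 5 with no backoff between attempts; this illustrates the visible behavior and is not the kubelet's actual code:

```go
// retry.go: shape of the update loop behind "will retry" /
// "exceeds retry count" in the entries above.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // upstream kubelet constant

func tryUpdateNodeStatus() error {
	// Stand-in for the PATCH the expired webhook certificate rejects.
	return errors.New("failed to patch status: webhook certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry err=%q\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Printf("Unable to update node status err=%q\n", err)
	}
}
```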
event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.316455 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.316465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.316482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.316494 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.419503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.419626 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.419655 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.419693 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.419718 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.429349 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.429349 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.429377 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:11 crc kubenswrapper[4688]: E0930 17:17:11.429658 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:11 crc kubenswrapper[4688]: E0930 17:17:11.429730 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:11 crc kubenswrapper[4688]: E0930 17:17:11.429786 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.522945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.523011 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.523034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.523064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.523087 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.626051 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.626108 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.626120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.626137 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.626149 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.728760 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.728820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.728837 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.728862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.728881 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.832680 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.832744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.832762 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.832788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.832806 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.936249 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.936320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.936338 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.936363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:11 crc kubenswrapper[4688]: I0930 17:17:11.936382 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:11Z","lastTransitionTime":"2025-09-30T17:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.039413 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.039481 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.039516 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.039547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.039606 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.142791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.142852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.142869 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.142895 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.142913 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.246485 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.246548 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.246592 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.246634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.246678 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.350057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.350135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.350153 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.350180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.350197 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.429376 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:12 crc kubenswrapper[4688]: E0930 17:17:12.429778 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.442832 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.452476 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.452524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.452536 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.452552 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.452562 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.556198 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.556262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.556281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.556306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.556322 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.661192 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.661263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.661284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.661320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.661335 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.765140 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.765198 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.765208 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.765224 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.765235 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.869182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.869283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.869300 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.869325 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.869342 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.972270 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.972320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.972330 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.972346 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:12 crc kubenswrapper[4688]: I0930 17:17:12.972357 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:12Z","lastTransitionTime":"2025-09-30T17:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.075605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.075690 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.075713 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.075747 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.075766 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:13Z","lastTransitionTime":"2025-09-30T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.180456 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.180538 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.180567 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.180648 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.180674 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:13Z","lastTransitionTime":"2025-09-30T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.284242 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.284341 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.284368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.284409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.284436 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:13Z","lastTransitionTime":"2025-09-30T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.388140 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.388206 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.388224 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.388284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.388309 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:13Z","lastTransitionTime":"2025-09-30T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.428534 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.428681 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.428534 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:13 crc kubenswrapper[4688]: E0930 17:17:13.428902 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:13 crc kubenswrapper[4688]: E0930 17:17:13.429232 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:13 crc kubenswrapper[4688]: E0930 17:17:13.429121 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
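[editor's aside] The pod_workers.go entries above show the gating that follows from NetworkReady=false: pods that need the pod network are skipped each sync, which is why only host-network pods (such as machine-config-daemon) make progress in this log while network-metrics-daemon and the network-check pods stall. A paraphrase of that visible behavior — the hostNetwork flags below are assumptions for illustration, and this is not the kubelet's actual code:

```go
// syncgate.go: pods needing pod-network are skipped while the runtime
// reports NetworkReady=false; host-network pods are exempt.
package main

import (
	"errors"
	"fmt"
)

type pod struct {
	name        string
	hostNetwork bool
}

var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

func syncPod(p pod, networkReady bool) error {
	if !networkReady && !p.hostNetwork {
		return errNetworkNotReady // surfaces as "Error syncing pod, skipping"
	}
	return nil // a real sync would proceed here
}

func main() {
	pods := []pod{
		{name: "openshift-multus/network-metrics-daemon-gbfmv", hostNetwork: false},
		{name: "openshift-machine-config-operator/machine-config-daemon-jggn4", hostNetwork: true},
	}
	for _, p := range pods {
		if err := syncPod(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping: pod=%q err=%v\n", p.name, err)
			continue
		}
		fmt.Printf("synced pod=%q\n", p.name)
	}
}
```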
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.456510 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8030a95b77340ed1b4cf796d7f42cadf9df067fd21a2aece070528f3b12b0bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.478313 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f607cccb-8e9d-47ff-8c93-6f4a791fa36b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a7e6391542f06e5e0774e7d01de4d5574c3b67d0525298f67e0789469b986cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6c9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jggn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.491272 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.491374 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.491400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.491432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.491458 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:13Z","lastTransitionTime":"2025-09-30T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.503352 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5dfa5a-bc52-43b4-a301-730ccb234480\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa
3b36f987f2c5d0a90606db4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:53Z\\\",\\\"message\\\":\\\"ions:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0930 17:16:53.265337 6688 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console-operator/metrics]} name:Service_openshift-console-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.88:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ebd4748e-0473-49fb-88ad-83dbb221791a}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 17:16:53.263091 6688 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5jsc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rwctq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.523837 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.543320 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.558687 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-82htv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3f10831-407a-4e6c-adb7-014837e056bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f105c0d3f98b3f827904c3f1a639a67d4bbd263fa346db0f90cc1c7ca8a59724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj7t5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-82htv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.572554 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23ba3e9d-4109-4a56-aac3-45917eec36d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8gv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gbfmv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.590642 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9wbfw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03cb23bc-dcc1-46e7-b238-67a6163f456d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T17:16:51Z\\\",\\\"message\\\":\\\"2025-09-30T17:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1\\\\n2025-09-30T17:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_dfb2f781-be95-41c1-9f9f-32047a46cbf1 to /host/opt/cni/bin/\\\\n2025-09-30T17:16:06Z [verbose] multus-daemon started\\\\n2025-09-30T17:16:06Z [verbose] Readiness Indicator file check\\\\n2025-09-30T17:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqkpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9wbfw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.593774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.593815 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.593832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.593853 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.593870 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:13Z","lastTransitionTime":"2025-09-30T17:17:13Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.611294 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-28cnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"826a3e0f-0f37-4659-a649-b0ce04c8b890\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4329bc8a85bb5d028c48f3d29d37c36f2f3672d5c1701fc0ea7dd691d8b95cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca8a3ee53c84d4ffcebc28746ad4369bcedba3464ed483be9ecf624d79f9720\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d39abc45d8793219328adb83de79724de94af2469937dae988a91f2ffbc3fb5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aedf50ca1c16d1ba9edd376040c9fe69d0c7fc8125ef1cc886a0672ccd445555\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbbbe318a06160ea889b2ff594ab5a59a7130b98faa1f91875cab9588bc18935\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:07Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3019096d2b5527095b4d6267c33f4f242527ec7c1fde570a0c4e0d853b29f01a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd5bb1302aff7d616e0600ac39aa4f05a9a88f406fd8dc85978ae8edac86b779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:16:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-822nf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-28cnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.623425 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5m4bx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b179aec9-bd56-4dac-acf2-52f6aa82681b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://196eee8b45ba0b727fbb8b2869a519843cad68665723050a359cfbf5e54d645f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wtnwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5m4bx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.636332 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaf2ef97-698b-4102-8f75-31f3ea7c586c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71050bd4365f2cc10ad82bf182ecf86dc17712a6c355674ff4fa09894810a022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4165f08b0c37f20fa78a8c680933ec886fe2e509fefc3a0e5dc235e86da1782\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4165f08b0c37f20fa78a8c680933ec886fe2e509fefc3a0e5dc235e86da1782\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.654804 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63bf325b-d537-41d2-bd85-0fb9167c7afa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7352a47a09cce284d9aa678db815ed99d1eccd46a23a35dcb488d6ff91329a91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://903bec89c242d2052f9c57eeac4f1b2b08a7894a32b052747ae8053a9f8cbf6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebe9c67e50e43e62a18685d98706066ffd0672c2cd8e27efc28f61e4523f2506\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c597b788554b3b8bfca2f3873afff8e31d8f950fc46fecc211848ca096a04e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.668821 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23096acf-56fe-4004-857a-f0f5f68e7d5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fef705c09affaf0e700ad920f765dbaecf6b4de9faecb3e8e34941dd837fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c20d2265db959de5267dd57af27d7cfb91715b7aa7b56ed2bb7824f94197d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://764aad3c1ba174df1a272346f4ee3d25460d802dbbccea13c1b77d30c81787d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e4652091c232e47f71453b2c6fadb04c11648d8c9bf152b9da90399384fb48c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.683215 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.697780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.697883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.697911 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.697944 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.697970 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:13Z","lastTransitionTime":"2025-09-30T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.704091 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5176e83474ce46e674e2a2a878a20dde4c0c03b7faddc9c0092666290cd31153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.721748 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15848e53-ecd7-4722-8ede-8800afdc5e25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70678e4c92e2c334564e0d8e35e5c14c97144f633c12d46cfbd0fd6c59d9b982\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1659cafd145f1a1edd4bc959494c53b40be463bd29e3d1f4a662e68420411d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jc9qc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:16:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-npp6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 
17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.741962 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5548858f-7460-466d-b7f0-69039ee1cb86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://739aff664ac87c8916f3d0e70a957105424fb5cc6f3eaa799d1a1d2c6b1382f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f46a5091ecc6632ad0c8cb664ab9fc3262292b0b89ddcae0254ece4a9b7f89d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80a09a920ef6c47ea6385c67c96e7d8dc3cc6ac55f2df365ed7322765bc75794\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9c986a21ff4f1fc95fa96f7834b3de3ff3003cb37259772827bc048448ba8cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d08ae3e851caf380f251875e7517242e628a43306d6afede1edea3ccc67394\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T17:16:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 17:16:03.139172 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 17:16:03.139326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 17:16:03.140108 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3867514691/tls.crt::/tmp/serving-cert-3867514691/tls.key\\\\\\\"\\\\nI0930 17:16:03.474628 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 17:16:03.486050 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 17:16:03.486073 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 17:16:03.486095 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 17:16:03.486100 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 17:16:03.539597 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 17:16:03.539632 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539637 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 17:16:03.539643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 17:16:03.539649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 17:16:03.539652 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 17:16:03.539656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 17:16:03.539875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 17:16:03.542437 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80dc019066a49ab39c6ebf1f2f58c7bdb3e2e1894f5af4fda72357f52ac0e87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:15:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4e5fa5bca0c5c55d30b44f4aacae4b0aa7d8280ee7bea601b7cb1f3ca16e33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T17:15:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T17:15:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T17:15:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.758294 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T17:16:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb2f1f162ddcc8533fa55e36f89ec98fb9e897ebbec44ebac79e2fb6b7fda64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08141582177f3b3c3d327f98e8cd328c12f028d720c1f9ab164505cede936ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T17:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T17:17:13Z is after 2025-08-24T17:21:41Z" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.800861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.800923 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.800949 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.800979 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.801003 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:13Z","lastTransitionTime":"2025-09-30T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.904992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.905039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.905056 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.905083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:13 crc kubenswrapper[4688]: I0930 17:17:13.905101 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:13Z","lastTransitionTime":"2025-09-30T17:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.008744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.008808 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.008826 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.008858 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.008883 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.112643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.112775 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.112796 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.112823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.112845 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.216426 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.216532 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.216551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.216604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.216625 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.320398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.320465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.320479 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.320503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.320521 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.429172 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:14 crc kubenswrapper[4688]: E0930 17:17:14.429479 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.440059 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.440122 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.440143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.440264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.440285 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.543999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.544069 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.544099 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.544149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.544173 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.648386 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.648450 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.648487 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.648530 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.648555 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.752457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.752547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.752608 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.752640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.752659 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.856857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.856922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.856940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.856966 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.856986 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.960897 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.960967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.960985 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.961016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:14 crc kubenswrapper[4688]: I0930 17:17:14.961034 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:14Z","lastTransitionTime":"2025-09-30T17:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.063767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.063843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.063860 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.063891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.063912 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.167506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.167610 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.167637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.167668 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.167691 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.271319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.271377 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.271389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.271408 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.271422 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.375285 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.375342 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.375355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.375374 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.375387 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.428430 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.428538 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:15 crc kubenswrapper[4688]: E0930 17:17:15.428789 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.428831 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:15 crc kubenswrapper[4688]: E0930 17:17:15.429224 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:15 crc kubenswrapper[4688]: E0930 17:17:15.429389 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.478364 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.478432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.478445 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.478463 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.478476 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.582393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.582499 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.582556 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.582626 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.582647 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.686347 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.686415 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.686427 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.686447 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.686466 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.788719 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.788771 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.788780 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.788799 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.788809 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.892039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.892117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.892139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.892167 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.892185 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.995193 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.995268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.995289 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.995316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:15 crc kubenswrapper[4688]: I0930 17:17:15.995338 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:15Z","lastTransitionTime":"2025-09-30T17:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.099725 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.099803 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.099823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.099854 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.099873 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:16Z","lastTransitionTime":"2025-09-30T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.204024 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.204109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.204134 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.204165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.204186 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:16Z","lastTransitionTime":"2025-09-30T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.307893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.307973 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.307994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.308024 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.308047 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T17:17:16Z","lastTransitionTime":"2025-09-30T17:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five-entry node-status block above (four kubelet_node_status.go:724 events plus the setters.go:603 "Node became not ready" condition) recurs at roughly 100 ms intervals, differing only in timestamps, through 17:17:21.375; those repeats are omitted. The distinct entries from that window follow in order.]
Sep 30 17:17:16 crc kubenswrapper[4688]: I0930 17:17:16.428692 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:17:16 crc kubenswrapper[4688]: E0930 17:17:16.428863 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 17:17:17 crc kubenswrapper[4688]: I0930 17:17:17.428961 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv"
Sep 30 17:17:17 crc kubenswrapper[4688]: I0930 17:17:17.429021 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:17:17 crc kubenswrapper[4688]: I0930 17:17:17.429081 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:17:17 crc kubenswrapper[4688]: E0930 17:17:17.429232 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7"
Sep 30 17:17:17 crc kubenswrapper[4688]: E0930 17:17:17.429414 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 17:17:17 crc kubenswrapper[4688]: E0930 17:17:17.429473 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 17:17:18 crc kubenswrapper[4688]: I0930 17:17:18.429036 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:17:18 crc kubenswrapper[4688]: E0930 17:17:18.429233 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 17:17:19 crc kubenswrapper[4688]: I0930 17:17:19.428898 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:17:19 crc kubenswrapper[4688]: I0930 17:17:19.428948 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv"
Sep 30 17:17:19 crc kubenswrapper[4688]: I0930 17:17:19.428996 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:17:19 crc kubenswrapper[4688]: E0930 17:17:19.429112 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 17:17:19 crc kubenswrapper[4688]: E0930 17:17:19.429253 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7"
Sep 30 17:17:19 crc kubenswrapper[4688]: E0930 17:17:19.429410 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 17:17:20 crc kubenswrapper[4688]: I0930 17:17:20.429312 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:17:20 crc kubenswrapper[4688]: E0930 17:17:20.429624 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
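[Context for the repeated condition above: the kubelet reports NetworkReady=false until a CNI network plugin writes a configuration file into /etc/kubernetes/cni/net.d/, so the node stays NotReady and no pod sandboxes can be created. As a minimal sketch of what that readiness check amounts to, the standalone Go program below scans the directory named in the log for a network config. It is illustrative only, not the kubelet's actual implementation; the accepted extensions (.conf, .conflist, .json) are assumed from the common libcni convention.]

// cnicheck.go: illustrative sketch, not kubelet code. Reports whether any
// CNI network configuration exists in the directory the log complains about.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}
	for _, e := range entries {
		// Extensions assumed from the libcni convention (.conf, .conflist, .json).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("NetworkReady=true: found %s\n", filepath.Join(confDir, e.Name()))
			return
		}
	}
	fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
	os.Exit(1)
}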
Has your network provider started?"} Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.428523 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.428653 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.428543 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:21 crc kubenswrapper[4688]: E0930 17:17:21.428860 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:21 crc kubenswrapper[4688]: E0930 17:17:21.429066 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:21 crc kubenswrapper[4688]: E0930 17:17:21.429232 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.460111 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc"] Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.460681 4688 util.go:30] "No sandbox for pod can be found. 
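Worth noting which pods are caught in the "Error syncing pod, skipping" loop: only the ones that need a CNI-managed sandbox (network-metrics-daemon, the two network-check pods, and the networking console plugin). Pods on the host network are exempt from this gate, which is why the static control-plane pods keep starting through the same window. A rough sketch of that gating decision, under my assumption that hostNetwork is the deciding field (the log itself only shows the outcome):

```python
from dataclasses import dataclass

@dataclass
class PodSpec:
    name: str
    host_network: bool

def can_sync(pod: PodSpec, network_ready: bool) -> bool:
    """Sketch of the gate behind the pod_workers errors above: while the
    runtime network is not ready, only host-network pods may sync;
    everything else is skipped with 'network is not ready: ...'."""
    return network_ready or pod.host_network

for pod in (PodSpec("openshift-etcd/etcd-crc", host_network=True),
            PodSpec("openshift-multus/network-metrics-daemon-gbfmv",
                    host_network=False)):
    print(pod.name, "->", "sync" if can_sync(pod, network_ready=False) else "skip")
```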
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.468174 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.468349 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.468465 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.468895 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.502788 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podStartSLOduration=78.502761002 podStartE2EDuration="1m18.502761002s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.498945292 +0000 UTC m=+98.794382940" watchObservedRunningTime="2025-09-30 17:17:21.502761002 +0000 UTC m=+98.798198590" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.544592 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e0f1e34-8451-458c-8040-47687018be3d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.544688 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0f1e34-8451-458c-8040-47687018be3d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.544720 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9e0f1e34-8451-458c-8040-47687018be3d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.544786 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9e0f1e34-8451-458c-8040-47687018be3d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.544821 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e0f1e34-8451-458c-8040-47687018be3d-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.563222 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-82htv" podStartSLOduration=78.563191552 podStartE2EDuration="1m18.563191552s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.563151951 +0000 UTC m=+98.858589529" watchObservedRunningTime="2025-09-30 17:17:21.563191552 +0000 UTC m=+98.858629130" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.584746 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.584723956 podStartE2EDuration="9.584723956s" podCreationTimestamp="2025-09-30 17:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.584690065 +0000 UTC m=+98.880127623" watchObservedRunningTime="2025-09-30 17:17:21.584723956 +0000 UTC m=+98.880161524" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.598965 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.598943437 podStartE2EDuration="1m18.598943437s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.59867633 +0000 UTC m=+98.894113888" watchObservedRunningTime="2025-09-30 17:17:21.598943437 +0000 UTC m=+98.894380995" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.611056 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.611035854 podStartE2EDuration="42.611035854s" podCreationTimestamp="2025-09-30 17:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.610688965 +0000 UTC m=+98.906126553" watchObservedRunningTime="2025-09-30 17:17:21.611035854 +0000 UTC m=+98.906473412" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.644602 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9wbfw" podStartSLOduration=78.64454992 podStartE2EDuration="1m18.64454992s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.643252236 +0000 UTC m=+98.938689814" watchObservedRunningTime="2025-09-30 17:17:21.64454992 +0000 UTC m=+98.939987478" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.645534 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9e0f1e34-8451-458c-8040-47687018be3d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 
17:17:21.645606 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e0f1e34-8451-458c-8040-47687018be3d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.645667 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e0f1e34-8451-458c-8040-47687018be3d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.645696 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0f1e34-8451-458c-8040-47687018be3d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.645725 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9e0f1e34-8451-458c-8040-47687018be3d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.645816 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9e0f1e34-8451-458c-8040-47687018be3d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.646520 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e0f1e34-8451-458c-8040-47687018be3d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.646593 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9e0f1e34-8451-458c-8040-47687018be3d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.651709 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0f1e34-8451-458c-8040-47687018be3d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.661261 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-28cnb" 
podStartSLOduration=78.661236487 podStartE2EDuration="1m18.661236487s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.660907728 +0000 UTC m=+98.956345296" watchObservedRunningTime="2025-09-30 17:17:21.661236487 +0000 UTC m=+98.956674055" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.671856 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5m4bx" podStartSLOduration=78.671828014 podStartE2EDuration="1m18.671828014s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.671739312 +0000 UTC m=+98.967176910" watchObservedRunningTime="2025-09-30 17:17:21.671828014 +0000 UTC m=+98.967265572" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.672604 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e0f1e34-8451-458c-8040-47687018be3d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vbnbc\" (UID: \"9e0f1e34-8451-458c-8040-47687018be3d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.702783 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-npp6j" podStartSLOduration=78.702753323 podStartE2EDuration="1m18.702753323s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.691613331 +0000 UTC m=+98.987050899" watchObservedRunningTime="2025-09-30 17:17:21.702753323 +0000 UTC m=+98.998190891" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.719983 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.719963613 podStartE2EDuration="1m17.719963613s" podCreationTimestamp="2025-09-30 17:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:21.719449949 +0000 UTC m=+99.014887507" watchObservedRunningTime="2025-09-30 17:17:21.719963613 +0000 UTC m=+99.015401171" Sep 30 17:17:21 crc kubenswrapper[4688]: I0930 17:17:21.786183 4688 util.go:30] "No sandbox for pod can be found. 
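The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is just watchObservedRunningTime minus podCreationTimestamp, and because both pull timestamps are the zero value ("0001-01-01 ..."), no image-pull window is subtracted and podStartSLOduration comes out identical. Checking the machine-config-daemon-jggn4 numbers:

```python
from datetime import datetime, timezone

# Timestamps copied from the machine-config-daemon-jggn4 entry above
# (nanoseconds truncated to microseconds to fit datetime).
created  = datetime(2025, 9, 30, 17, 16, 3, tzinfo=timezone.utc)
observed = datetime(2025, 9, 30, 17, 17, 21, 502761, tzinfo=timezone.utc)

print((observed - created).total_seconds())  # 78.502761 ~= "1m18.502761002s"

# The m=+98.79... suffixes are a different clock entirely: seconds since
# this kubelet process (pid 4688) started, not since the pod was created.
```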
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" Sep 30 17:17:21 crc kubenswrapper[4688]: W0930 17:17:21.805031 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e0f1e34_8451_458c_8040_47687018be3d.slice/crio-e2904abb49948ac183897e71ad37f17ecd928b1f87473481a76776564aa7e91d WatchSource:0}: Error finding container e2904abb49948ac183897e71ad37f17ecd928b1f87473481a76776564aa7e91d: Status 404 returned error can't find the container with id e2904abb49948ac183897e71ad37f17ecd928b1f87473481a76776564aa7e91d Sep 30 17:17:22 crc kubenswrapper[4688]: I0930 17:17:22.050526 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:22 crc kubenswrapper[4688]: E0930 17:17:22.050773 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:17:22 crc kubenswrapper[4688]: E0930 17:17:22.050902 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs podName:23ba3e9d-4109-4a56-aac3-45917eec36d7 nodeName:}" failed. No retries permitted until 2025-09-30 17:18:26.050873568 +0000 UTC m=+163.346311126 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs") pod "network-metrics-daemon-gbfmv" (UID: "23ba3e9d-4109-4a56-aac3-45917eec36d7") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 17:17:22 crc kubenswrapper[4688]: I0930 17:17:22.087873 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" event={"ID":"9e0f1e34-8451-458c-8040-47687018be3d","Type":"ContainerStarted","Data":"acae8d354cecac8c0319bda9516a3fcea935bc89b3cf392972e7bc992762498a"} Sep 30 17:17:22 crc kubenswrapper[4688]: I0930 17:17:22.087934 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" event={"ID":"9e0f1e34-8451-458c-8040-47687018be3d","Type":"ContainerStarted","Data":"e2904abb49948ac183897e71ad37f17ecd928b1f87473481a76776564aa7e91d"} Sep 30 17:17:22 crc kubenswrapper[4688]: I0930 17:17:22.102845 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vbnbc" podStartSLOduration=79.102827957 podStartE2EDuration="1m19.102827957s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:22.100987419 +0000 UTC m=+99.396424977" watchObservedRunningTime="2025-09-30 17:17:22.102827957 +0000 UTC m=+99.398265535" Sep 30 17:17:22 crc kubenswrapper[4688]: I0930 17:17:22.429021 4688 util.go:30] "No sandbox for pod can be found. 
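The failed metrics-certs mount above also shows the volume manager's retry bookkeeping: the operation is parked, and "No retries permitted until" is simply the failure time plus durationBeforeRetry. The 1m4s (64 s) value suggests the per-operation doubling backoff has already climbed several steps (my inference; the log states only the duration). The timestamps check out:

```python
from datetime import datetime, timedelta, timezone

# 'No retries permitted until' and durationBeforeRetry, from the
# nestedpendingoperations entry above (nanoseconds truncated to µs).
retry_until = datetime(2025, 9, 30, 17, 18, 26, 50873, tzinfo=timezone.utc)
backoff     = timedelta(minutes=1, seconds=4)  # durationBeforeRetry 1m4s

print(retry_until - backoff)  # 2025-09-30 17:17:22.050873+00:00,
# matching when the MountVolume.SetUp failure was logged (17:17:22.0509).
```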
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:22 crc kubenswrapper[4688]: E0930 17:17:22.430112 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:23 crc kubenswrapper[4688]: I0930 17:17:23.430608 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:23 crc kubenswrapper[4688]: I0930 17:17:23.430620 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:23 crc kubenswrapper[4688]: I0930 17:17:23.430776 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:23 crc kubenswrapper[4688]: E0930 17:17:23.431975 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:23 crc kubenswrapper[4688]: E0930 17:17:23.432093 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:23 crc kubenswrapper[4688]: E0930 17:17:23.432201 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:24 crc kubenswrapper[4688]: I0930 17:17:24.429501 4688 scope.go:117] "RemoveContainer" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:17:24 crc kubenswrapper[4688]: E0930 17:17:24.429681 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rwctq_openshift-ovn-kubernetes(cb5dfa5a-bc52-43b4-a301-730ccb234480)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" Sep 30 17:17:24 crc kubenswrapper[4688]: I0930 17:17:24.429927 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:24 crc kubenswrapper[4688]: E0930 17:17:24.430100 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:25 crc kubenswrapper[4688]: I0930 17:17:25.428854 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:25 crc kubenswrapper[4688]: I0930 17:17:25.428942 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:25 crc kubenswrapper[4688]: I0930 17:17:25.428982 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:25 crc kubenswrapper[4688]: E0930 17:17:25.429955 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:25 crc kubenswrapper[4688]: E0930 17:17:25.430191 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:25 crc kubenswrapper[4688]: E0930 17:17:25.430340 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:26 crc kubenswrapper[4688]: I0930 17:17:26.429284 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:26 crc kubenswrapper[4688]: E0930 17:17:26.430134 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:27 crc kubenswrapper[4688]: I0930 17:17:27.429240 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:27 crc kubenswrapper[4688]: I0930 17:17:27.429341 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:27 crc kubenswrapper[4688]: I0930 17:17:27.429425 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:27 crc kubenswrapper[4688]: E0930 17:17:27.429442 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:27 crc kubenswrapper[4688]: E0930 17:17:27.429529 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:27 crc kubenswrapper[4688]: E0930 17:17:27.429673 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:28 crc kubenswrapper[4688]: I0930 17:17:28.429082 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:28 crc kubenswrapper[4688]: E0930 17:17:28.429248 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:29 crc kubenswrapper[4688]: I0930 17:17:29.428528 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:29 crc kubenswrapper[4688]: I0930 17:17:29.428613 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:29 crc kubenswrapper[4688]: E0930 17:17:29.428871 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:29 crc kubenswrapper[4688]: I0930 17:17:29.428609 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:29 crc kubenswrapper[4688]: E0930 17:17:29.429253 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:29 crc kubenswrapper[4688]: E0930 17:17:29.429337 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:30 crc kubenswrapper[4688]: I0930 17:17:30.428707 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:30 crc kubenswrapper[4688]: E0930 17:17:30.428872 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:30 crc kubenswrapper[4688]: I0930 17:17:30.444102 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Sep 30 17:17:31 crc kubenswrapper[4688]: I0930 17:17:31.428770 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:31 crc kubenswrapper[4688]: I0930 17:17:31.428848 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:31 crc kubenswrapper[4688]: I0930 17:17:31.428778 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:31 crc kubenswrapper[4688]: E0930 17:17:31.429198 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:31 crc kubenswrapper[4688]: E0930 17:17:31.429311 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:31 crc kubenswrapper[4688]: E0930 17:17:31.429069 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:32 crc kubenswrapper[4688]: I0930 17:17:32.429253 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:32 crc kubenswrapper[4688]: E0930 17:17:32.429467 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:33 crc kubenswrapper[4688]: I0930 17:17:33.429143 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:33 crc kubenswrapper[4688]: I0930 17:17:33.429172 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:33 crc kubenswrapper[4688]: E0930 17:17:33.430897 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:33 crc kubenswrapper[4688]: I0930 17:17:33.431280 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:33 crc kubenswrapper[4688]: E0930 17:17:33.431390 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:33 crc kubenswrapper[4688]: E0930 17:17:33.431668 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:33 crc kubenswrapper[4688]: I0930 17:17:33.472142 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.472114895 podStartE2EDuration="3.472114895s" podCreationTimestamp="2025-09-30 17:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:33.472099965 +0000 UTC m=+110.767537613" watchObservedRunningTime="2025-09-30 17:17:33.472114895 +0000 UTC m=+110.767552483" Sep 30 17:17:34 crc kubenswrapper[4688]: I0930 17:17:34.429083 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:34 crc kubenswrapper[4688]: E0930 17:17:34.429286 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:35 crc kubenswrapper[4688]: I0930 17:17:35.429033 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:35 crc kubenswrapper[4688]: I0930 17:17:35.429187 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:35 crc kubenswrapper[4688]: I0930 17:17:35.429946 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:35 crc kubenswrapper[4688]: E0930 17:17:35.430180 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:35 crc kubenswrapper[4688]: E0930 17:17:35.430375 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:35 crc kubenswrapper[4688]: E0930 17:17:35.430602 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:36 crc kubenswrapper[4688]: I0930 17:17:36.428974 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:36 crc kubenswrapper[4688]: E0930 17:17:36.429204 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:37 crc kubenswrapper[4688]: I0930 17:17:37.429094 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:37 crc kubenswrapper[4688]: I0930 17:17:37.429131 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:37 crc kubenswrapper[4688]: E0930 17:17:37.429276 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:37 crc kubenswrapper[4688]: E0930 17:17:37.429425 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:37 crc kubenswrapper[4688]: I0930 17:17:37.429684 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:37 crc kubenswrapper[4688]: E0930 17:17:37.430136 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:38 crc kubenswrapper[4688]: I0930 17:17:38.149123 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/1.log" Sep 30 17:17:38 crc kubenswrapper[4688]: I0930 17:17:38.149703 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/0.log" Sep 30 17:17:38 crc kubenswrapper[4688]: I0930 17:17:38.149764 4688 generic.go:334] "Generic (PLEG): container finished" podID="03cb23bc-dcc1-46e7-b238-67a6163f456d" containerID="d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1" exitCode=1 Sep 30 17:17:38 crc kubenswrapper[4688]: I0930 17:17:38.149814 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9wbfw" event={"ID":"03cb23bc-dcc1-46e7-b238-67a6163f456d","Type":"ContainerDied","Data":"d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1"} Sep 30 17:17:38 crc kubenswrapper[4688]: I0930 17:17:38.149874 4688 scope.go:117] "RemoveContainer" containerID="6dcc4b229b8d058c75f0a0072b2e0b288cfc73c9905a2264fddf031377519416" Sep 30 17:17:38 crc kubenswrapper[4688]: I0930 17:17:38.150242 4688 scope.go:117] "RemoveContainer" containerID="d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1" Sep 30 17:17:38 crc kubenswrapper[4688]: E0930 17:17:38.150467 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9wbfw_openshift-multus(03cb23bc-dcc1-46e7-b238-67a6163f456d)\"" pod="openshift-multus/multus-9wbfw" podUID="03cb23bc-dcc1-46e7-b238-67a6163f456d" Sep 30 17:17:38 crc kubenswrapper[4688]: I0930 17:17:38.428734 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:38 crc kubenswrapper[4688]: E0930 17:17:38.428881 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:39 crc kubenswrapper[4688]: I0930 17:17:39.154720 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/1.log" Sep 30 17:17:39 crc kubenswrapper[4688]: I0930 17:17:39.428930 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:39 crc kubenswrapper[4688]: I0930 17:17:39.428969 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:39 crc kubenswrapper[4688]: E0930 17:17:39.429283 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:39 crc kubenswrapper[4688]: I0930 17:17:39.429322 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:39 crc kubenswrapper[4688]: E0930 17:17:39.429993 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:39 crc kubenswrapper[4688]: E0930 17:17:39.430240 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:39 crc kubenswrapper[4688]: I0930 17:17:39.430342 4688 scope.go:117] "RemoveContainer" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:17:40 crc kubenswrapper[4688]: I0930 17:17:40.160781 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/3.log" Sep 30 17:17:40 crc kubenswrapper[4688]: I0930 17:17:40.164036 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerStarted","Data":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} Sep 30 17:17:40 crc kubenswrapper[4688]: I0930 17:17:40.164527 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:17:40 crc kubenswrapper[4688]: I0930 17:17:40.281012 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podStartSLOduration=97.280986256 podStartE2EDuration="1m37.280986256s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:17:40.193908638 +0000 UTC m=+117.489346216" watchObservedRunningTime="2025-09-30 17:17:40.280986256 +0000 UTC m=+117.576423814" Sep 30 17:17:40 crc kubenswrapper[4688]: I0930 17:17:40.282472 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gbfmv"] Sep 30 17:17:40 crc kubenswrapper[4688]: I0930 17:17:40.282675 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:40 crc kubenswrapper[4688]: E0930 17:17:40.282826 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:40 crc kubenswrapper[4688]: I0930 17:17:40.429149 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:40 crc kubenswrapper[4688]: E0930 17:17:40.429286 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:41 crc kubenswrapper[4688]: I0930 17:17:41.428837 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:41 crc kubenswrapper[4688]: I0930 17:17:41.428897 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:41 crc kubenswrapper[4688]: E0930 17:17:41.429036 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:41 crc kubenswrapper[4688]: I0930 17:17:41.429124 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:41 crc kubenswrapper[4688]: E0930 17:17:41.429169 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:41 crc kubenswrapper[4688]: E0930 17:17:41.429346 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:42 crc kubenswrapper[4688]: I0930 17:17:42.435663 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:42 crc kubenswrapper[4688]: E0930 17:17:42.435896 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:43 crc kubenswrapper[4688]: E0930 17:17:43.428025 4688 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Sep 30 17:17:43 crc kubenswrapper[4688]: I0930 17:17:43.428288 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:43 crc kubenswrapper[4688]: I0930 17:17:43.428326 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:43 crc kubenswrapper[4688]: E0930 17:17:43.430916 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:43 crc kubenswrapper[4688]: I0930 17:17:43.430963 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:43 crc kubenswrapper[4688]: E0930 17:17:43.431187 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:43 crc kubenswrapper[4688]: E0930 17:17:43.431323 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:43 crc kubenswrapper[4688]: E0930 17:17:43.588878 4688 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 30 17:17:44 crc kubenswrapper[4688]: I0930 17:17:44.428350 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:44 crc kubenswrapper[4688]: E0930 17:17:44.429202 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:45 crc kubenswrapper[4688]: I0930 17:17:45.429168 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:45 crc kubenswrapper[4688]: I0930 17:17:45.429473 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:45 crc kubenswrapper[4688]: E0930 17:17:45.429690 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:45 crc kubenswrapper[4688]: I0930 17:17:45.429822 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:45 crc kubenswrapper[4688]: E0930 17:17:45.430254 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:45 crc kubenswrapper[4688]: E0930 17:17:45.430548 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:46 crc kubenswrapper[4688]: I0930 17:17:46.429114 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:46 crc kubenswrapper[4688]: E0930 17:17:46.429442 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:47 crc kubenswrapper[4688]: I0930 17:17:47.429261 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:47 crc kubenswrapper[4688]: I0930 17:17:47.429448 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:47 crc kubenswrapper[4688]: E0930 17:17:47.429535 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:47 crc kubenswrapper[4688]: I0930 17:17:47.429662 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:47 crc kubenswrapper[4688]: E0930 17:17:47.429782 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:47 crc kubenswrapper[4688]: E0930 17:17:47.429874 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:48 crc kubenswrapper[4688]: I0930 17:17:48.429179 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:48 crc kubenswrapper[4688]: E0930 17:17:48.429457 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:48 crc kubenswrapper[4688]: E0930 17:17:48.590529 4688 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 30 17:17:49 crc kubenswrapper[4688]: I0930 17:17:49.428843 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:49 crc kubenswrapper[4688]: I0930 17:17:49.428850 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:49 crc kubenswrapper[4688]: E0930 17:17:49.429072 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:49 crc kubenswrapper[4688]: I0930 17:17:49.429012 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:49 crc kubenswrapper[4688]: E0930 17:17:49.429320 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:49 crc kubenswrapper[4688]: E0930 17:17:49.429657 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:50 crc kubenswrapper[4688]: I0930 17:17:50.428529 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:50 crc kubenswrapper[4688]: E0930 17:17:50.428799 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:50 crc kubenswrapper[4688]: I0930 17:17:50.429282 4688 scope.go:117] "RemoveContainer" containerID="d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1" Sep 30 17:17:51 crc kubenswrapper[4688]: I0930 17:17:51.210551 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/1.log" Sep 30 17:17:51 crc kubenswrapper[4688]: I0930 17:17:51.210992 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9wbfw" event={"ID":"03cb23bc-dcc1-46e7-b238-67a6163f456d","Type":"ContainerStarted","Data":"52b96a4ef6fd89ececac978f4f3cf7408a21cb7ddefbfc26a626e86d40b5e776"} Sep 30 17:17:51 crc kubenswrapper[4688]: I0930 17:17:51.429079 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:51 crc kubenswrapper[4688]: I0930 17:17:51.429136 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:51 crc kubenswrapper[4688]: E0930 17:17:51.429320 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:51 crc kubenswrapper[4688]: I0930 17:17:51.429371 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:51 crc kubenswrapper[4688]: E0930 17:17:51.429614 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:51 crc kubenswrapper[4688]: E0930 17:17:51.429756 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:52 crc kubenswrapper[4688]: I0930 17:17:52.428777 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:52 crc kubenswrapper[4688]: E0930 17:17:52.428988 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 17:17:53 crc kubenswrapper[4688]: I0930 17:17:53.428681 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:53 crc kubenswrapper[4688]: I0930 17:17:53.428793 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:53 crc kubenswrapper[4688]: E0930 17:17:53.428939 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gbfmv" podUID="23ba3e9d-4109-4a56-aac3-45917eec36d7" Sep 30 17:17:53 crc kubenswrapper[4688]: I0930 17:17:53.428995 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:53 crc kubenswrapper[4688]: E0930 17:17:53.429283 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 17:17:53 crc kubenswrapper[4688]: E0930 17:17:53.431416 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 17:17:54 crc kubenswrapper[4688]: I0930 17:17:54.428405 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 17:17:54 crc kubenswrapper[4688]: I0930 17:17:54.431609 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Sep 30 17:17:54 crc kubenswrapper[4688]: I0930 17:17:54.431915 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Sep 30 17:17:55 crc kubenswrapper[4688]: I0930 17:17:55.428906 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:17:55 crc kubenswrapper[4688]: I0930 17:17:55.429057 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:17:55 crc kubenswrapper[4688]: I0930 17:17:55.429096 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 17:17:55 crc kubenswrapper[4688]: I0930 17:17:55.432349 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Sep 30 17:17:55 crc kubenswrapper[4688]: I0930 17:17:55.433532 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Sep 30 17:17:55 crc kubenswrapper[4688]: I0930 17:17:55.433565 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Sep 30 17:17:55 crc kubenswrapper[4688]: I0930 17:17:55.436022 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.812708 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.884258 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vxw96"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.885142 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.887988 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.888755 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.889250 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qkj9n"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.889942 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.891393 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.891644 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.891847 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.892449 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.892787 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7znjl"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.893363 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.895181 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tfh56"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.895837 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.898423 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bfz2g"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.899275 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.904033 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vspn6"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.904860 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.906973 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.907820 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.910118 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.913645 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.914551 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.914553 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.914828 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.914777 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.916862 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.929924 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.929965 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.929995 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.930056 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.929965 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.930484 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.932340 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.933277 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.938204 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qkj9n"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.941707 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.942122 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.942197 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.942351 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.942484 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.942779 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.942968 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.943116 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.943404 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.943517 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.943666 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.943976 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.944063 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.944147 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.944516 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.946406 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.946589 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Sep 30 17:18:01 crc 
kubenswrapper[4688]: I0930 17:18:01.946714 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.946582 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.946957 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947083 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947192 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947276 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947345 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947411 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947478 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947547 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947743 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947841 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.947906 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948363 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tfh56"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948396 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948474 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948522 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948545 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948628 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Sep 30 
17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948695 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948746 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948764 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948853 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.948974 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.942122 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949103 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949176 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949250 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949321 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949317 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949352 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949378 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949406 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-config\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949435 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95a1453c-5fd6-494f-ba7c-af218220ede9-serving-cert\") pod \"openshift-config-operator-7777fb866f-vspn6\" (UID: \"95a1453c-5fd6-494f-ba7c-af218220ede9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949462 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949414 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949489 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949593 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-machine-approver-tls\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949636 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-audit-policies\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949641 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949657 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-etcd-serving-ca\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949672 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-serving-cert\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949691 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949712 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949730 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d78qt\" (UniqueName: \"kubernetes.io/projected/4f4fc367-8dbb-4df3-9fa6-564810f36316-kube-api-access-d78qt\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949748 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ca2b5f9-6396-4e66-82df-35688788b49f-serving-cert\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949798 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95a1453c-5fd6-494f-ba7c-af218220ede9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vspn6\" (UID: \"95a1453c-5fd6-494f-ba7c-af218220ede9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949817 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-audit-dir\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949837 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-client-ca\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949851 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4fc367-8dbb-4df3-9fa6-564810f36316-serving-cert\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949899 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-etcd-client\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949916 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949931 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949950 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.949967 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950012 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-image-import-ca\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950031 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj2hw\" (UniqueName: \"kubernetes.io/projected/2459f53c-c84d-439c-8da4-4dea0f42a2e3-kube-api-access-dj2hw\") pod \"openshift-apiserver-operator-796bbdcf4f-n4pgj\" (UID: \"2459f53c-c84d-439c-8da4-4dea0f42a2e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-serving-cert\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950078 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs8wc\" (UniqueName: \"kubernetes.io/projected/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-kube-api-access-gs8wc\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950096 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-config\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950120 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9rl\" (UniqueName: \"kubernetes.io/projected/7254866f-2108-4fc6-827a-5e177b48259d-kube-api-access-bn9rl\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950137 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-auth-proxy-config\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950135 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950437 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950153 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/a7575a89-fbed-4028-a04f-2ab40e7a9081-kube-api-access-w4wm7\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950782 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-client-ca\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950837 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-audit\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950870 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/9ca2b5f9-6396-4e66-82df-35688788b49f-trusted-ca\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950902 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950942 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-node-pullsecrets\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.950973 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-images\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951005 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951173 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbl9n\" (UniqueName: \"kubernetes.io/projected/95a1453c-5fd6-494f-ba7c-af218220ede9-kube-api-access-lbl9n\") pod \"openshift-config-operator-7777fb866f-vspn6\" (UID: \"95a1453c-5fd6-494f-ba7c-af218220ede9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951208 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-etcd-client\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951247 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ca2b5f9-6396-4e66-82df-35688788b49f-config\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951281 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951310 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951355 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951385 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7254866f-2108-4fc6-827a-5e177b48259d-audit-dir\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951416 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-audit-dir\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951443 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk5ll\" (UniqueName: \"kubernetes.io/projected/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-kube-api-access-fk5ll\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951493 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-encryption-config\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951520 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-config\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951406 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951658 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gx4w\" (UniqueName: \"kubernetes.io/projected/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-kube-api-access-7gx4w\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951704 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7575a89-fbed-4028-a04f-2ab40e7a9081-serving-cert\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.951787 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-config\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.953178 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.953400 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.955355 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.955975 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.957193 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.958734 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5zr\" (UniqueName: \"kubernetes.io/projected/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-kube-api-access-cd5zr\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.958837 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-encryption-config\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.958887 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-audit-policies\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.958909 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-config\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.958966 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2459f53c-c84d-439c-8da4-4dea0f42a2e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4pgj\" (UID: \"2459f53c-c84d-439c-8da4-4dea0f42a2e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.958996 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2459f53c-c84d-439c-8da4-4dea0f42a2e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4pgj\" (UID: \"2459f53c-c84d-439c-8da4-4dea0f42a2e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.959020 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbb52\" (UniqueName: \"kubernetes.io/projected/9ca2b5f9-6396-4e66-82df-35688788b49f-kube-api-access-xbb52\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.963923 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.964119 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.964362 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.964410 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.964531 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.966400 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.967772 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6jdt9"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.968067 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.968378 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84"] Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.968813 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" Sep 30 17:18:01 crc kubenswrapper[4688]: I0930 17:18:01.969254 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.002008 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.002383 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.002006 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.002857 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.003646 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.003978 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.019530 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.020058 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.020423 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.020801 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.020796 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.021462 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.021958 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.022342 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.022872 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.023207 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.023831 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.024102 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.024366 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.024636 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.024889 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.024919 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dcxhn"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.026615 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wl7nh"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.027164 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.027549 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.029126 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.029495 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.030154 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.030168 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.030313 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.030886 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-t6z28"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.030961 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.031074 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.031144 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.031425 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.031517 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t6z28" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.031428 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.031829 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.032078 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.032084 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.032121 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.032210 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.032311 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.032367 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.032436 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.032484 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.033264 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.036078 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.037183 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6qrtn"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.037321 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.038181 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.040811 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.041239 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.045441 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.046338 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.048362 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.049853 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.050269 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.054091 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d9xbs"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.054855 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.055957 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-bhb8d"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.057185 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.060133 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.062629 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.063058 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/651f3b4e-974b-4637-a09c-2e7b95407c8f-etcd-ca\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.063288 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.063374 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d78qt\" (UniqueName: \"kubernetes.io/projected/4f4fc367-8dbb-4df3-9fa6-564810f36316-kube-api-access-d78qt\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.063449 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/213ea898-4f81-49ad-afaa-000178b8d1f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rz9sh\" (UID: \"213ea898-4f81-49ad-afaa-000178b8d1f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.063538 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ca2b5f9-6396-4e66-82df-35688788b49f-serving-cert\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.063694 
4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22sr\" (UniqueName: \"kubernetes.io/projected/6f650a2d-90e1-465d-bc6b-1451aa0099ac-kube-api-access-f22sr\") pod \"control-plane-machine-set-operator-78cbb6b69f-cfd84\" (UID: \"6f650a2d-90e1-465d-bc6b-1451aa0099ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.063866 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541f325d-4d87-4e4f-93c6-11a1d8406c3e-config\") pod \"kube-apiserver-operator-766d6c64bb-bl66m\" (UID: \"541f325d-4d87-4e4f-93c6-11a1d8406c3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.063958 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85wjs"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.064172 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95a1453c-5fd6-494f-ba7c-af218220ede9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vspn6\" (UID: \"95a1453c-5fd6-494f-ba7c-af218220ede9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.064775 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23776ac-7592-4f59-8b10-27d85e19ae81-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dbtg8\" (UID: \"b23776ac-7592-4f59-8b10-27d85e19ae81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.064898 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9556dbe-5bce-417e-84f1-1e7ed221d383-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-422bt\" (UID: \"c9556dbe-5bce-417e-84f1-1e7ed221d383\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.064686 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.064587 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.065099 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-audit-dir\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.065284 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-client-ca\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.066260 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4fc367-8dbb-4df3-9fa6-564810f36316-serving-cert\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.066356 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-etcd-client\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.066435 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.066511 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.066745 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adef629-f94a-4968-bd8a-89b01e4f9859-service-ca-bundle\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.066831 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.066911 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067208 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651f3b4e-974b-4637-a09c-2e7b95407c8f-config\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067293 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-image-import-ca\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067420 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj2hw\" (UniqueName: \"kubernetes.io/projected/2459f53c-c84d-439c-8da4-4dea0f42a2e3-kube-api-access-dj2hw\") pod \"openshift-apiserver-operator-796bbdcf4f-n4pgj\" (UID: \"2459f53c-c84d-439c-8da4-4dea0f42a2e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067496 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-serving-cert\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067561 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs8wc\" (UniqueName: \"kubernetes.io/projected/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-kube-api-access-gs8wc\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067668 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-config\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067755 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr86n\" (UniqueName: \"kubernetes.io/projected/213ea898-4f81-49ad-afaa-000178b8d1f4-kube-api-access-mr86n\") pod \"cluster-samples-operator-665b6dd947-rz9sh\" (UID: \"213ea898-4f81-49ad-afaa-000178b8d1f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067829 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7adef629-f94a-4968-bd8a-89b01e4f9859-serving-cert\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067903 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-service-ca\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.067978 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh9hw\" (UniqueName: \"kubernetes.io/projected/651f3b4e-974b-4637-a09c-2e7b95407c8f-kube-api-access-fh9hw\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068052 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn9rl\" (UniqueName: \"kubernetes.io/projected/7254866f-2108-4fc6-827a-5e177b48259d-kube-api-access-bn9rl\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068127 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-auth-proxy-config\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068195 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-config\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068267 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-trusted-ca-bundle\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068341 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-oauth-serving-cert\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068446 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/a7575a89-fbed-4028-a04f-2ab40e7a9081-kube-api-access-w4wm7\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.064743 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95a1453c-5fd6-494f-ba7c-af218220ede9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vspn6\" (UID: \"95a1453c-5fd6-494f-ba7c-af218220ede9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.066229 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-client-ca\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068524 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-client-ca\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068725 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-audit\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068826 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ca2b5f9-6396-4e66-82df-35688788b49f-trusted-ca\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068901 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-node-pullsecrets\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.069047 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-images\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.069118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.069195 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wl44\" (UniqueName: \"kubernetes.io/projected/b0dd035e-5b22-4ac8-8cfd-a08d50e5e311-kube-api-access-6wl44\") pod \"openshift-controller-manager-operator-756b6f6bc6-fhqnk\" (UID: \"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.069272 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23776ac-7592-4f59-8b10-27d85e19ae81-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dbtg8\" (UID: \"b23776ac-7592-4f59-8b10-27d85e19ae81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.069333 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-image-import-ca\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.069345 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f650a2d-90e1-465d-bc6b-1451aa0099ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cfd84\" (UID: \"6f650a2d-90e1-465d-bc6b-1451aa0099ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.069495 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbl9n\" (UniqueName: \"kubernetes.io/projected/95a1453c-5fd6-494f-ba7c-af218220ede9-kube-api-access-lbl9n\") pod \"openshift-config-operator-7777fb866f-vspn6\" (UID: \"95a1453c-5fd6-494f-ba7c-af218220ede9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.069573 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9556dbe-5bce-417e-84f1-1e7ed221d383-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-422bt\" (UID: \"c9556dbe-5bce-417e-84f1-1e7ed221d383\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.074143 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-etcd-client\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: 
I0930 17:18:02.074708 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ca2b5f9-6396-4e66-82df-35688788b49f-config\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.074835 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.074949 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.075076 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.075919 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7254866f-2108-4fc6-827a-5e177b48259d-audit-dir\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.086965 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adef629-f94a-4968-bd8a-89b01e4f9859-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.087399 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-audit-dir\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.087498 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk5ll\" (UniqueName: \"kubernetes.io/projected/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-kube-api-access-fk5ll\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.087619 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-encryption-config\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.068476 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.069686 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088228 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-config\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.070480 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-client-ca\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.065146 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-audit-dir\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.070822 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-config\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.071091 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-images\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.075367 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ca2b5f9-6396-4e66-82df-35688788b49f-config\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.071661 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-audit\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.076035 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7254866f-2108-4fc6-827a-5e177b48259d-audit-dir\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.077742 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.071749 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.087757 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-config\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088452 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/541f325d-4d87-4e4f-93c6-11a1d8406c3e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bl66m\" (UID: \"541f325d-4d87-4e4f-93c6-11a1d8406c3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088481 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gx4w\" (UniqueName: \"kubernetes.io/projected/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-kube-api-access-7gx4w\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088502 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7575a89-fbed-4028-a04f-2ab40e7a9081-serving-cert\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088525 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-oauth-config\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088545 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/651f3b4e-974b-4637-a09c-2e7b95407c8f-etcd-client\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088590 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-config\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088642 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0dd035e-5b22-4ac8-8cfd-a08d50e5e311-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fhqnk\" (UID: \"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088666 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd5zr\" (UniqueName: \"kubernetes.io/projected/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-kube-api-access-cd5zr\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088701 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-encryption-config\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088720 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-audit-policies\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-config\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088760 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdgvz\" (UniqueName: \"kubernetes.io/projected/19515a5a-5268-41e3-8288-0e41a71c9dc4-kube-api-access-cdgvz\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088781 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541f325d-4d87-4e4f-93c6-11a1d8406c3e-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-bl66m\" (UID: \"541f325d-4d87-4e4f-93c6-11a1d8406c3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088807 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/651f3b4e-974b-4637-a09c-2e7b95407c8f-etcd-service-ca\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088829 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swg5z\" (UniqueName: \"kubernetes.io/projected/7adef629-f94a-4968-bd8a-89b01e4f9859-kube-api-access-swg5z\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088852 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2459f53c-c84d-439c-8da4-4dea0f42a2e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4pgj\" (UID: \"2459f53c-c84d-439c-8da4-4dea0f42a2e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088876 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2459f53c-c84d-439c-8da4-4dea0f42a2e3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4pgj\" (UID: \"2459f53c-c84d-439c-8da4-4dea0f42a2e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088899 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbb52\" (UniqueName: \"kubernetes.io/projected/9ca2b5f9-6396-4e66-82df-35688788b49f-kube-api-access-xbb52\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088921 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088943 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088964 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.088988 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651f3b4e-974b-4637-a09c-2e7b95407c8f-serving-cert\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089013 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-config\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089034 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95a1453c-5fd6-494f-ba7c-af218220ede9-serving-cert\") pod \"openshift-config-operator-7777fb866f-vspn6\" (UID: \"95a1453c-5fd6-494f-ba7c-af218220ede9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089056 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089080 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7adef629-f94a-4968-bd8a-89b01e4f9859-config\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089103 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-serving-cert\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089122 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9556dbe-5bce-417e-84f1-1e7ed221d383-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-422bt\" (UID: \"c9556dbe-5bce-417e-84f1-1e7ed221d383\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089147 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: 
\"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089166 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-machine-approver-tls\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089192 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-audit-policies\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089212 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0dd035e-5b22-4ac8-8cfd-a08d50e5e311-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fhqnk\" (UID: \"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089237 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfqss\" (UniqueName: \"kubernetes.io/projected/b23776ac-7592-4f59-8b10-27d85e19ae81-kube-api-access-kfqss\") pod \"kube-storage-version-migrator-operator-b67b599dd-dbtg8\" (UID: \"b23776ac-7592-4f59-8b10-27d85e19ae81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089261 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-etcd-serving-ca\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089283 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-serving-cert\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.089305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.073995 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ca2b5f9-6396-4e66-82df-35688788b49f-trusted-ca\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 
17:18:02.071899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-node-pullsecrets\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.073141 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.090691 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.074733 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-auth-proxy-config\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.092056 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-audit-dir\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.093694 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.110763 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-audit-policies\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.093987 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.094433 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-config\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.102321 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-audit-policies\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.110353 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2459f53c-c84d-439c-8da4-4dea0f42a2e3-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-n4pgj\" (UID: \"2459f53c-c84d-439c-8da4-4dea0f42a2e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.111930 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-smnqd"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.112698 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.113452 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.114284 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.093926 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.114439 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.114519 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.114626 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.115469 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.116348 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-config\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.095263 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-etcd-serving-ca\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.118212 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-45pf9"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.119699 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6ft79"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.120146 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.120994 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.121085 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.121550 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bfz2g"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.121840 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.122060 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4fc367-8dbb-4df3-9fa6-564810f36316-serving-cert\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.122325 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.124070 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.126921 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-encryption-config\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.128339 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vxw96"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.129669 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.129853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-serving-cert\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.130161 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.130204 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-encryption-config\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.130508 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-config\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.130556 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.130904 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-etcd-client\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.131311 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ca2b5f9-6396-4e66-82df-35688788b49f-serving-cert\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.125528 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2459f53c-c84d-439c-8da4-4dea0f42a2e3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4pgj\" (UID: \"2459f53c-c84d-439c-8da4-4dea0f42a2e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.131888 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7575a89-fbed-4028-a04f-2ab40e7a9081-serving-cert\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.132115 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-etcd-client\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.132855 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.135530 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vspn6"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.135582 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.136140 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.136212 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95a1453c-5fd6-494f-ba7c-af218220ede9-serving-cert\") pod \"openshift-config-operator-7777fb866f-vspn6\" (UID: \"95a1453c-5fd6-494f-ba7c-af218220ede9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 
17:18:02.136905 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.136905 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.136950 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.137239 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-serving-cert\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.138204 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.138844 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-machine-approver-tls\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.145030 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9nd28"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.146357 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9nd28" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.148261 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.148528 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.154015 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.154076 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6jdt9"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.154088 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7znjl"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.155782 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d9xbs"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.156486 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.157533 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.158559 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.160671 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.160726 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.162328 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.163408 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.164452 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.165853 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85wjs"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.167219 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.167822 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.168927 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.170227 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t6z28"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.172753 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.173886 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wl7nh"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.175619 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dcxhn"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.176905 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.177960 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.179930 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.181668 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-smnqd"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.182878 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.184408 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6qrtn"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.187350 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-45pf9"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.187653 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.188702 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190330 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651f3b4e-974b-4637-a09c-2e7b95407c8f-serving-cert\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190383 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7adef629-f94a-4968-bd8a-89b01e4f9859-config\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190423 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-serving-cert\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190444 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6ft79"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190455 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9556dbe-5bce-417e-84f1-1e7ed221d383-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-422bt\" (UID: \"c9556dbe-5bce-417e-84f1-1e7ed221d383\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0dd035e-5b22-4ac8-8cfd-a08d50e5e311-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fhqnk\" (UID: \"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190636 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfqss\" (UniqueName: \"kubernetes.io/projected/b23776ac-7592-4f59-8b10-27d85e19ae81-kube-api-access-kfqss\") pod \"kube-storage-version-migrator-operator-b67b599dd-dbtg8\" (UID: \"b23776ac-7592-4f59-8b10-27d85e19ae81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190705 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/651f3b4e-974b-4637-a09c-2e7b95407c8f-etcd-ca\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190781 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/213ea898-4f81-49ad-afaa-000178b8d1f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rz9sh\" (UID: \"213ea898-4f81-49ad-afaa-000178b8d1f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190808 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22sr\" (UniqueName: \"kubernetes.io/projected/6f650a2d-90e1-465d-bc6b-1451aa0099ac-kube-api-access-f22sr\") pod \"control-plane-machine-set-operator-78cbb6b69f-cfd84\" (UID: \"6f650a2d-90e1-465d-bc6b-1451aa0099ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190851 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541f325d-4d87-4e4f-93c6-11a1d8406c3e-config\") pod \"kube-apiserver-operator-766d6c64bb-bl66m\" (UID: \"541f325d-4d87-4e4f-93c6-11a1d8406c3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:02 crc 
kubenswrapper[4688]: I0930 17:18:02.190876 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23776ac-7592-4f59-8b10-27d85e19ae81-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dbtg8\" (UID: \"b23776ac-7592-4f59-8b10-27d85e19ae81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190899 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9556dbe-5bce-417e-84f1-1e7ed221d383-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-422bt\" (UID: \"c9556dbe-5bce-417e-84f1-1e7ed221d383\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.190957 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvrv7\" (UniqueName: \"kubernetes.io/projected/24dbec1e-8bd6-471c-bda5-ac5df4a771a1-kube-api-access-hvrv7\") pod \"downloads-7954f5f757-t6z28\" (UID: \"24dbec1e-8bd6-471c-bda5-ac5df4a771a1\") " pod="openshift-console/downloads-7954f5f757-t6z28" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.191032 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adef629-f94a-4968-bd8a-89b01e4f9859-service-ca-bundle\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.191055 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651f3b4e-974b-4637-a09c-2e7b95407c8f-config\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.191125 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr86n\" (UniqueName: \"kubernetes.io/projected/213ea898-4f81-49ad-afaa-000178b8d1f4-kube-api-access-mr86n\") pod \"cluster-samples-operator-665b6dd947-rz9sh\" (UID: \"213ea898-4f81-49ad-afaa-000178b8d1f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.191155 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7adef629-f94a-4968-bd8a-89b01e4f9859-serving-cert\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.191238 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-service-ca\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.191267 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-config\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.191323 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-trusted-ca-bundle\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.191350 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-oauth-serving-cert\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.191992 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh9hw\" (UniqueName: \"kubernetes.io/projected/651f3b4e-974b-4637-a09c-2e7b95407c8f-kube-api-access-fh9hw\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.192038 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9556dbe-5bce-417e-84f1-1e7ed221d383-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-422bt\" (UID: \"c9556dbe-5bce-417e-84f1-1e7ed221d383\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.192065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25f3ac70-1e22-4b7c-ba7d-2e952da0c248-srv-cert\") pod \"olm-operator-6b444d44fb-ttss4\" (UID: \"25f3ac70-1e22-4b7c-ba7d-2e952da0c248\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.192190 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-65pp7"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.192329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541f325d-4d87-4e4f-93c6-11a1d8406c3e-config\") pod \"kube-apiserver-operator-766d6c64bb-bl66m\" (UID: \"541f325d-4d87-4e4f-93c6-11a1d8406c3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.192944 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wl44\" (UniqueName: \"kubernetes.io/projected/b0dd035e-5b22-4ac8-8cfd-a08d50e5e311-kube-api-access-6wl44\") pod \"openshift-controller-manager-operator-756b6f6bc6-fhqnk\" (UID: \"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193096 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b23776ac-7592-4f59-8b10-27d85e19ae81-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dbtg8\" (UID: \"b23776ac-7592-4f59-8b10-27d85e19ae81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193181 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f650a2d-90e1-465d-bc6b-1451aa0099ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cfd84\" (UID: \"6f650a2d-90e1-465d-bc6b-1451aa0099ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193280 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9556dbe-5bce-417e-84f1-1e7ed221d383-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-422bt\" (UID: \"c9556dbe-5bce-417e-84f1-1e7ed221d383\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193362 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25f3ac70-1e22-4b7c-ba7d-2e952da0c248-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ttss4\" (UID: \"25f3ac70-1e22-4b7c-ba7d-2e952da0c248\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193443 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqxb8\" (UniqueName: \"kubernetes.io/projected/25f3ac70-1e22-4b7c-ba7d-2e952da0c248-kube-api-access-fqxb8\") pod \"olm-operator-6b444d44fb-ttss4\" (UID: \"25f3ac70-1e22-4b7c-ba7d-2e952da0c248\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193582 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adef629-f94a-4968-bd8a-89b01e4f9859-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193720 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/541f325d-4d87-4e4f-93c6-11a1d8406c3e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bl66m\" (UID: \"541f325d-4d87-4e4f-93c6-11a1d8406c3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193808 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-oauth-config\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193952 4688 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/651f3b4e-974b-4637-a09c-2e7b95407c8f-etcd-client\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.194067 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0dd035e-5b22-4ac8-8cfd-a08d50e5e311-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fhqnk\" (UID: \"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.194191 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdgvz\" (UniqueName: \"kubernetes.io/projected/19515a5a-5268-41e3-8288-0e41a71c9dc4-kube-api-access-cdgvz\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.194275 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541f325d-4d87-4e4f-93c6-11a1d8406c3e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bl66m\" (UID: \"541f325d-4d87-4e4f-93c6-11a1d8406c3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.194360 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/651f3b4e-974b-4637-a09c-2e7b95407c8f-etcd-service-ca\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.194432 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swg5z\" (UniqueName: \"kubernetes.io/projected/7adef629-f94a-4968-bd8a-89b01e4f9859-kube-api-access-swg5z\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193651 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-brznm"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.194704 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adef629-f94a-4968-bd8a-89b01e4f9859-service-ca-bundle\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.193971 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b23776ac-7592-4f59-8b10-27d85e19ae81-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dbtg8\" (UID: \"b23776ac-7592-4f59-8b10-27d85e19ae81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.195262 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0dd035e-5b22-4ac8-8cfd-a08d50e5e311-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fhqnk\" (UID: \"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.195386 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/651f3b4e-974b-4637-a09c-2e7b95407c8f-etcd-service-ca\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.195621 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7adef629-f94a-4968-bd8a-89b01e4f9859-config\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.195851 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7adef629-f94a-4968-bd8a-89b01e4f9859-serving-cert\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.196044 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7adef629-f94a-4968-bd8a-89b01e4f9859-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.196262 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9556dbe-5bce-417e-84f1-1e7ed221d383-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-422bt\" (UID: \"c9556dbe-5bce-417e-84f1-1e7ed221d383\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.196643 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-65pp7"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.196796 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-brznm" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.196811 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-brznm"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.196914 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-65pp7" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.197036 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-serving-cert\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.197714 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lwmxt"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.198779 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b23776ac-7592-4f59-8b10-27d85e19ae81-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dbtg8\" (UID: \"b23776ac-7592-4f59-8b10-27d85e19ae81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.198812 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lwmxt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.198851 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f650a2d-90e1-465d-bc6b-1451aa0099ac-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cfd84\" (UID: \"6f650a2d-90e1-465d-bc6b-1451aa0099ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.198964 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541f325d-4d87-4e4f-93c6-11a1d8406c3e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bl66m\" (UID: \"541f325d-4d87-4e4f-93c6-11a1d8406c3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.199542 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lwmxt"] Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.199826 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/213ea898-4f81-49ad-afaa-000178b8d1f4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rz9sh\" (UID: \"213ea898-4f81-49ad-afaa-000178b8d1f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.200319 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-oauth-config\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.201173 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0dd035e-5b22-4ac8-8cfd-a08d50e5e311-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fhqnk\" (UID: \"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.207729 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.214293 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-config\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.227445 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.233736 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-service-ca\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.252660 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.265929 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-trusted-ca-bundle\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.267849 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.287101 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.296461 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25f3ac70-1e22-4b7c-ba7d-2e952da0c248-srv-cert\") pod \"olm-operator-6b444d44fb-ttss4\" (UID: \"25f3ac70-1e22-4b7c-ba7d-2e952da0c248\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.296548 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqxb8\" (UniqueName: \"kubernetes.io/projected/25f3ac70-1e22-4b7c-ba7d-2e952da0c248-kube-api-access-fqxb8\") pod \"olm-operator-6b444d44fb-ttss4\" (UID: \"25f3ac70-1e22-4b7c-ba7d-2e952da0c248\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.296583 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25f3ac70-1e22-4b7c-ba7d-2e952da0c248-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ttss4\" (UID: \"25f3ac70-1e22-4b7c-ba7d-2e952da0c248\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.296779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvrv7\" (UniqueName: 
\"kubernetes.io/projected/24dbec1e-8bd6-471c-bda5-ac5df4a771a1-kube-api-access-hvrv7\") pod \"downloads-7954f5f757-t6z28\" (UID: \"24dbec1e-8bd6-471c-bda5-ac5df4a771a1\") " pod="openshift-console/downloads-7954f5f757-t6z28" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.298357 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/651f3b4e-974b-4637-a09c-2e7b95407c8f-etcd-client\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.308019 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.327013 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.334030 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-oauth-serving-cert\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.347799 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.353484 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651f3b4e-974b-4637-a09c-2e7b95407c8f-config\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.369180 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.388188 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.392057 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/651f3b4e-974b-4637-a09c-2e7b95407c8f-etcd-ca\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.408908 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.417054 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651f3b4e-974b-4637-a09c-2e7b95407c8f-serving-cert\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.447680 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.472919 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.488150 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.507250 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.536054 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.548277 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.567328 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.587944 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.607731 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.629007 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.648668 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.667811 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.688642 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.707070 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.727699 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.747852 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.768466 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.787829 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.800144 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25f3ac70-1e22-4b7c-ba7d-2e952da0c248-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ttss4\" (UID: \"25f3ac70-1e22-4b7c-ba7d-2e952da0c248\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.807120 4688 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.820303 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25f3ac70-1e22-4b7c-ba7d-2e952da0c248-srv-cert\") pod \"olm-operator-6b444d44fb-ttss4\" (UID: \"25f3ac70-1e22-4b7c-ba7d-2e952da0c248\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.827643 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.847840 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.868034 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.887997 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.907754 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.927496 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.948847 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.968346 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Sep 30 17:18:02 crc kubenswrapper[4688]: I0930 17:18:02.988344 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.028315 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d78qt\" (UniqueName: \"kubernetes.io/projected/4f4fc367-8dbb-4df3-9fa6-564810f36316-kube-api-access-d78qt\") pod \"route-controller-manager-6576b87f9c-q828v\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.050862 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj2hw\" (UniqueName: \"kubernetes.io/projected/2459f53c-c84d-439c-8da4-4dea0f42a2e3-kube-api-access-dj2hw\") pod \"openshift-apiserver-operator-796bbdcf4f-n4pgj\" (UID: \"2459f53c-c84d-439c-8da4-4dea0f42a2e3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.063027 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs8wc\" (UniqueName: \"kubernetes.io/projected/d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef-kube-api-access-gs8wc\") pod \"apiserver-7bbb656c7d-f2b5c\" (UID: \"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:03 
crc kubenswrapper[4688]: I0930 17:18:03.085855 4688 request.go:700] Waited for 1.014890385s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.086828 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn9rl\" (UniqueName: \"kubernetes.io/projected/7254866f-2108-4fc6-827a-5e177b48259d-kube-api-access-bn9rl\") pod \"oauth-openshift-558db77b4-7znjl\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.115665 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/a7575a89-fbed-4028-a04f-2ab40e7a9081-kube-api-access-w4wm7\") pod \"controller-manager-879f6c89f-qkj9n\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.126247 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbl9n\" (UniqueName: \"kubernetes.io/projected/95a1453c-5fd6-494f-ba7c-af218220ede9-kube-api-access-lbl9n\") pod \"openshift-config-operator-7777fb866f-vspn6\" (UID: \"95a1453c-5fd6-494f-ba7c-af218220ede9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.161323 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk5ll\" (UniqueName: \"kubernetes.io/projected/2d4b9d3a-9096-4b80-91fa-778ea8d5599a-kube-api-access-fk5ll\") pod \"apiserver-76f77b778f-vxw96\" (UID: \"2d4b9d3a-9096-4b80-91fa-778ea8d5599a\") " pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.173223 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gx4w\" (UniqueName: \"kubernetes.io/projected/8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda-kube-api-access-7gx4w\") pod \"machine-api-operator-5694c8668f-tfh56\" (UID: \"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.177078 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.184678 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd5zr\" (UniqueName: \"kubernetes.io/projected/1fb1e11f-d437-42db-9fd2-c15e98b8b8ef-kube-api-access-cd5zr\") pod \"machine-approver-56656f9798-dr6nv\" (UID: \"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.204490 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbb52\" (UniqueName: \"kubernetes.io/projected/9ca2b5f9-6396-4e66-82df-35688788b49f-kube-api-access-xbb52\") pod \"console-operator-58897d9998-bfz2g\" (UID: \"9ca2b5f9-6396-4e66-82df-35688788b49f\") " pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.208118 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.226020 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.228460 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.238928 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.248504 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.249108 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.266843 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.268683 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.292370 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.293124 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.311412 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.323940 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.333924 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.351106 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.363539 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.369186 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.391172 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.408349 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.411029 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.429360 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.438096 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qkj9n"] Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.439829 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.451117 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.462114 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tfh56"] Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.467843 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.487776 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.508800 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.528787 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.552798 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vspn6"] Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.560943 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.569757 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.587783 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.592056 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v"] Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.608117 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.628347 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.633679 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj"] Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.647969 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.667102 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.688796 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.705057 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7znjl"] Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.708031 
4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.712243 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bfz2g"] Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.722515 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vxw96"] Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.728350 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.748095 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.767281 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.787373 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.808654 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.828274 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.829234 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c"] Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.850960 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.878395 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.908369 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.929039 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.948918 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Sep 30 17:18:03 crc kubenswrapper[4688]: I0930 17:18:03.990453 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfqss\" (UniqueName: \"kubernetes.io/projected/b23776ac-7592-4f59-8b10-27d85e19ae81-kube-api-access-kfqss\") pod \"kube-storage-version-migrator-operator-b67b599dd-dbtg8\" (UID: \"b23776ac-7592-4f59-8b10-27d85e19ae81\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.008517 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22sr\" (UniqueName: 
\"kubernetes.io/projected/6f650a2d-90e1-465d-bc6b-1451aa0099ac-kube-api-access-f22sr\") pod \"control-plane-machine-set-operator-78cbb6b69f-cfd84\" (UID: \"6f650a2d-90e1-465d-bc6b-1451aa0099ac\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.025852 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr86n\" (UniqueName: \"kubernetes.io/projected/213ea898-4f81-49ad-afaa-000178b8d1f4-kube-api-access-mr86n\") pod \"cluster-samples-operator-665b6dd947-rz9sh\" (UID: \"213ea898-4f81-49ad-afaa-000178b8d1f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.041650 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh9hw\" (UniqueName: \"kubernetes.io/projected/651f3b4e-974b-4637-a09c-2e7b95407c8f-kube-api-access-fh9hw\") pod \"etcd-operator-b45778765-dcxhn\" (UID: \"651f3b4e-974b-4637-a09c-2e7b95407c8f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.042186 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.062207 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wl44\" (UniqueName: \"kubernetes.io/projected/b0dd035e-5b22-4ac8-8cfd-a08d50e5e311-kube-api-access-6wl44\") pod \"openshift-controller-manager-operator-756b6f6bc6-fhqnk\" (UID: \"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.073949 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.083716 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9556dbe-5bce-417e-84f1-1e7ed221d383-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-422bt\" (UID: \"c9556dbe-5bce-417e-84f1-1e7ed221d383\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.086445 4688 request.go:700] Waited for 1.892496078s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.106022 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/541f325d-4d87-4e4f-93c6-11a1d8406c3e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bl66m\" (UID: \"541f325d-4d87-4e4f-93c6-11a1d8406c3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.127902 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdgvz\" (UniqueName: \"kubernetes.io/projected/19515a5a-5268-41e3-8288-0e41a71c9dc4-kube-api-access-cdgvz\") pod \"console-f9d7485db-wl7nh\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") " pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.147823 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swg5z\" (UniqueName: \"kubernetes.io/projected/7adef629-f94a-4968-bd8a-89b01e4f9859-kube-api-access-swg5z\") pod \"authentication-operator-69f744f599-6jdt9\" (UID: \"7adef629-f94a-4968-bd8a-89b01e4f9859\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.148147 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.168440 4688 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.187692 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.210471 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.228577 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.248094 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.258505 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8"] Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.267210 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" event={"ID":"a7575a89-fbed-4028-a04f-2ab40e7a9081","Type":"ContainerStarted","Data":"8fca22d5ce5fbb61a7f38dc497985af48e252ec9bd2b6a30ca4157c47b48e37a"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.267298 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" event={"ID":"a7575a89-fbed-4028-a04f-2ab40e7a9081","Type":"ContainerStarted","Data":"1738f9cdba19375f304c7d2a61983ff5010c50541e5b816ec59f648a9af84d3c"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.267474 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.268622 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.269485 4688 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qkj9n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.269527 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" podUID="a7575a89-fbed-4028-a04f-2ab40e7a9081" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.272578 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.272584 4688 generic.go:334] "Generic (PLEG): container finished" podID="2d4b9d3a-9096-4b80-91fa-778ea8d5599a" containerID="6b5948c90d902f4d89eeb9f9301a8e623d592c25877df048cc7d1c1e2955c2fd" exitCode=0 Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.272750 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" event={"ID":"2d4b9d3a-9096-4b80-91fa-778ea8d5599a","Type":"ContainerDied","Data":"6b5948c90d902f4d89eeb9f9301a8e623d592c25877df048cc7d1c1e2955c2fd"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.272791 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" event={"ID":"2d4b9d3a-9096-4b80-91fa-778ea8d5599a","Type":"ContainerStarted","Data":"117543f77d613f629b7e972dff81349822fa80d0ce5af0b71a8757083379619a"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.277503 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" event={"ID":"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef","Type":"ContainerStarted","Data":"53e4b5ada31ba35a32485e1b7a3e6060a215ca5a5bc8aa196d6901c5944703d9"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.278529 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.288245 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.289913 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.293331 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" event={"ID":"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda","Type":"ContainerStarted","Data":"ccdaf404932e54924eca219490b6582bce55e55199fa4dfc0fb56af0675f3f7b"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.293381 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" event={"ID":"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda","Type":"ContainerStarted","Data":"49ee93d843b2bb91b62a322abeb6366792757bc83d20f057074e54cbda42ccdd"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.293412 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" event={"ID":"8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda","Type":"ContainerStarted","Data":"ccba8d6e8fddfcc9316ecde7ae8c50762452248450d49ef639a57660c3dad553"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.295601 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" event={"ID":"2459f53c-c84d-439c-8da4-4dea0f42a2e3","Type":"ContainerStarted","Data":"47df6e4d5472db03ac937b176a43d4c4630f86a1c0120e2a77aa48963c1419bc"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.295705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" event={"ID":"2459f53c-c84d-439c-8da4-4dea0f42a2e3","Type":"ContainerStarted","Data":"278bbea4d192ae753f0560b22ef057aa56094ea5636d01d19b1fc0ac0c2d85b1"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.310280 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.314018 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dcxhn"] Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.315879 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" event={"ID":"7254866f-2108-4fc6-827a-5e177b48259d","Type":"ContainerStarted","Data":"fb1129a61d6eb266910d39dad03881ea549772222aa139e34fb234fa8baefe57"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.315948 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" event={"ID":"7254866f-2108-4fc6-827a-5e177b48259d","Type":"ContainerStarted","Data":"bc4e440fea841952de6f5e7908cdf9efea6eb505f06c7798c6795ccfa6a9a290"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.317038 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.318191 4688 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7znjl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" 
start-of-body= Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.318237 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" podUID="7254866f-2108-4fc6-827a-5e177b48259d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.319274 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bfz2g" event={"ID":"9ca2b5f9-6396-4e66-82df-35688788b49f","Type":"ContainerStarted","Data":"a104f32547a7095d0e9fc746857248cae1c4675cd10d6965e6e699248ea893ff"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.319305 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bfz2g" event={"ID":"9ca2b5f9-6396-4e66-82df-35688788b49f","Type":"ContainerStarted","Data":"7040bd13d01b556de8cbf2c4cc3b1629a51b3a5896ef10d6f8dc3084cceb2f45"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.320085 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bfz2g" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.320712 4688 patch_prober.go:28] interesting pod/console-operator-58897d9998-bfz2g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.320765 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bfz2g" podUID="9ca2b5f9-6396-4e66-82df-35688788b49f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.325107 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.329520 4688 generic.go:334] "Generic (PLEG): container finished" podID="95a1453c-5fd6-494f-ba7c-af218220ede9" containerID="25c77d65d942bef9da0a253daf666e00800f2f1b1374adda7acf2b81bee3f726" exitCode=0 Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.329639 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" event={"ID":"95a1453c-5fd6-494f-ba7c-af218220ede9","Type":"ContainerDied","Data":"25c77d65d942bef9da0a253daf666e00800f2f1b1374adda7acf2b81bee3f726"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.329677 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" event={"ID":"95a1453c-5fd6-494f-ba7c-af218220ede9","Type":"ContainerStarted","Data":"3459620c835c09459407816417f54829856aeb75dc803c12d68c002fa6e8918b"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.330654 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.331742 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.333027 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" event={"ID":"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef","Type":"ContainerStarted","Data":"8b742f4d0136304a5b78e355baa1456f116f6aac7ae87efad940c041c1a7d362"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.333064 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" event={"ID":"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef","Type":"ContainerStarted","Data":"0aa48e6c24300a2e2ad8a0faa3446a959443bae316ea3c8df0069efddaad6bdc"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.335666 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" event={"ID":"4f4fc367-8dbb-4df3-9fa6-564810f36316","Type":"ContainerStarted","Data":"89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.335700 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" event={"ID":"4f4fc367-8dbb-4df3-9fa6-564810f36316","Type":"ContainerStarted","Data":"a6583c45533ad0cc83fa6d229d9a67391e6f198c866816df86bcb2b7e68945f2"} Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.336191 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.338782 4688 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q828v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.338827 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" podUID="4f4fc367-8dbb-4df3-9fa6-564810f36316" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.354100 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.363439 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.367979 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvrv7\" (UniqueName: \"kubernetes.io/projected/24dbec1e-8bd6-471c-bda5-ac5df4a771a1-kube-api-access-hvrv7\") pod \"downloads-7954f5f757-t6z28\" (UID: \"24dbec1e-8bd6-471c-bda5-ac5df4a771a1\") " pod="openshift-console/downloads-7954f5f757-t6z28" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.383457 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-t6z28" Sep 30 17:18:04 crc kubenswrapper[4688]: W0930 17:18:04.385173 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod651f3b4e_974b_4637_a09c_2e7b95407c8f.slice/crio-4d40670cfc85ce825faf20f2039345d60c0d194d2c0d8d8601e8e3eb220a48e5 WatchSource:0}: Error finding container 4d40670cfc85ce825faf20f2039345d60c0d194d2c0d8d8601e8e3eb220a48e5: Status 404 returned error can't find the container with id 4d40670cfc85ce825faf20f2039345d60c0d194d2c0d8d8601e8e3eb220a48e5 Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.385914 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqxb8\" (UniqueName: \"kubernetes.io/projected/25f3ac70-1e22-4b7c-ba7d-2e952da0c248-kube-api-access-fqxb8\") pod \"olm-operator-6b444d44fb-ttss4\" (UID: \"25f3ac70-1e22-4b7c-ba7d-2e952da0c248\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.414835 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fc656260-bc8c-4a9d-b6cb-ddea5e161058-signing-cabundle\") pod \"service-ca-9c57cc56f-45pf9\" (UID: \"fc656260-bc8c-4a9d-b6cb-ddea5e161058\") " pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427170 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrw5l\" (UniqueName: \"kubernetes.io/projected/645a26b3-241a-4738-8654-689e2dd6b93a-kube-api-access-jrw5l\") pod \"service-ca-operator-777779d784-6ft79\" (UID: \"645a26b3-241a-4738-8654-689e2dd6b93a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427226 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1537ab-4a36-4cae-aa5c-5035d1705936-config-volume\") pod \"collect-profiles-29320875-n5ljd\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427269 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ggg4\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-kube-api-access-6ggg4\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427291 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-metrics-tls\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427323 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-smnqd\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427345 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc255fcb-01db-4aed-b6d4-89f4c5555dd6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rmr2p\" (UID: \"fc255fcb-01db-4aed-b6d4-89f4c5555dd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427366 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53f98e9c-2118-4997-b893-dd6744089ce9-proxy-tls\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427451 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wfg\" (UniqueName: \"kubernetes.io/projected/fc255fcb-01db-4aed-b6d4-89f4c5555dd6-kube-api-access-h5wfg\") pod \"package-server-manager-789f6589d5-rmr2p\" (UID: \"fc255fcb-01db-4aed-b6d4-89f4c5555dd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427506 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-registry-certificates\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427530 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53f98e9c-2118-4997-b893-dd6744089ce9-images\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427583 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b3787c4-3ac4-4a95-b946-6fe5e8d91b94-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mjln6\" (UID: \"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427653 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/14d12c61-000b-4766-93ff-ae30454582a7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427675 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-smnqd\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427713 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8nmw\" (UniqueName: \"kubernetes.io/projected/fc656260-bc8c-4a9d-b6cb-ddea5e161058-kube-api-access-q8nmw\") pod \"service-ca-9c57cc56f-45pf9\" (UID: \"fc656260-bc8c-4a9d-b6cb-ddea5e161058\") " pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427738 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e76da010-cdc9-41ad-82a5-19ecb9a7e278-metrics-tls\") pod \"dns-operator-744455d44c-6qrtn\" (UID: \"e76da010-cdc9-41ad-82a5-19ecb9a7e278\") " pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427785 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/22bdca4c-6929-427a-93bd-6b39d5b1376d-default-certificate\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427809 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645a26b3-241a-4738-8654-689e2dd6b93a-config\") pod \"service-ca-operator-777779d784-6ft79\" (UID: \"645a26b3-241a-4738-8654-689e2dd6b93a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427837 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b3787c4-3ac4-4a95-b946-6fe5e8d91b94-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mjln6\" (UID: \"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427861 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53f98e9c-2118-4997-b893-dd6744089ce9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427908 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l92xb\" (UniqueName: \"kubernetes.io/projected/52fbf822-f7a7-4048-aef9-493aa1989f48-kube-api-access-l92xb\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") 
" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.427963 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b3986b0-0f8b-4404-aefd-f8a0510e1150-apiservice-cert\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.428043 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71af9722-bf72-4336-9e21-4fe59401c4c8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pgbkf\" (UID: \"71af9722-bf72-4336-9e21-4fe59401c4c8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.428231 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/52fbf822-f7a7-4048-aef9-493aa1989f48-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.428313 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4fd\" (UniqueName: \"kubernetes.io/projected/012d15a1-9240-4170-be9c-5578b1036190-kube-api-access-vs4fd\") pod \"catalog-operator-68c6474976-5qlst\" (UID: \"012d15a1-9240-4170-be9c-5578b1036190\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.428684 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5b3986b0-0f8b-4404-aefd-f8a0510e1150-tmpfs\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.430310 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c3ebb64-9996-4f07-a582-bebc5edeae3e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d9xbs\" (UID: \"1c3ebb64-9996-4f07-a582-bebc5edeae3e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.430815 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/012d15a1-9240-4170-be9c-5578b1036190-srv-cert\") pod \"catalog-operator-68c6474976-5qlst\" (UID: \"012d15a1-9240-4170-be9c-5578b1036190\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.431504 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75scv\" (UniqueName: \"kubernetes.io/projected/22bdca4c-6929-427a-93bd-6b39d5b1376d-kube-api-access-75scv\") pod \"router-default-5444994796-bhb8d\" (UID: 
\"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.431662 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1537ab-4a36-4cae-aa5c-5035d1705936-secret-volume\") pod \"collect-profiles-29320875-n5ljd\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.431798 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/52fbf822-f7a7-4048-aef9-493aa1989f48-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.431879 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645a26b3-241a-4738-8654-689e2dd6b93a-serving-cert\") pod \"service-ca-operator-777779d784-6ft79\" (UID: \"645a26b3-241a-4738-8654-689e2dd6b93a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.432099 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-registry-tls\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.432143 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b3787c4-3ac4-4a95-b946-6fe5e8d91b94-config\") pod \"kube-controller-manager-operator-78b949d7b-mjln6\" (UID: \"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.432452 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/22bdca4c-6929-427a-93bd-6b39d5b1376d-stats-auth\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.432634 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/14d12c61-000b-4766-93ff-ae30454582a7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.432870 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpv7b\" (UniqueName: \"kubernetes.io/projected/5b3986b0-0f8b-4404-aefd-f8a0510e1150-kube-api-access-gpv7b\") pod \"packageserver-d55dfcdfc-68dks\" (UID: 
\"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.433037 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl2f9\" (UniqueName: \"kubernetes.io/projected/e5286b99-2b69-4447-9137-a65e72ed5d3a-kube-api-access-xl2f9\") pod \"migrator-59844c95c7-fbxbf\" (UID: \"e5286b99-2b69-4447-9137-a65e72ed5d3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.433147 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/52fbf822-f7a7-4048-aef9-493aa1989f48-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.433218 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qp2\" (UniqueName: \"kubernetes.io/projected/71af9722-bf72-4336-9e21-4fe59401c4c8-kube-api-access-p4qp2\") pod \"machine-config-controller-84d6567774-pgbkf\" (UID: \"71af9722-bf72-4336-9e21-4fe59401c4c8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.433761 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b3986b0-0f8b-4404-aefd-f8a0510e1150-webhook-cert\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.434004 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-trusted-ca\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.434963 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7n7x\" (UniqueName: \"kubernetes.io/projected/a171a3b9-c317-4aa7-84c8-f201dbaa653d-kube-api-access-b7n7x\") pod \"marketplace-operator-79b997595-smnqd\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.435051 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-trusted-ca\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.435075 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fc656260-bc8c-4a9d-b6cb-ddea5e161058-signing-key\") pod \"service-ca-9c57cc56f-45pf9\" (UID: 
\"fc656260-bc8c-4a9d-b6cb-ddea5e161058\") " pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.435102 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8klt2\" (UniqueName: \"kubernetes.io/projected/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-kube-api-access-8klt2\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.435121 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhm9q\" (UniqueName: \"kubernetes.io/projected/bf1537ab-4a36-4cae-aa5c-5035d1705936-kube-api-access-lhm9q\") pod \"collect-profiles-29320875-n5ljd\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.435577 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/012d15a1-9240-4170-be9c-5578b1036190-profile-collector-cert\") pod \"catalog-operator-68c6474976-5qlst\" (UID: \"012d15a1-9240-4170-be9c-5578b1036190\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.435649 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdnnx\" (UniqueName: \"kubernetes.io/projected/53f98e9c-2118-4997-b893-dd6744089ce9-kube-api-access-qdnnx\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.437029 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22bdca4c-6929-427a-93bd-6b39d5b1376d-metrics-certs\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.437112 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z697s\" (UniqueName: \"kubernetes.io/projected/1c3ebb64-9996-4f07-a582-bebc5edeae3e-kube-api-access-z697s\") pod \"multus-admission-controller-857f4d67dd-d9xbs\" (UID: \"1c3ebb64-9996-4f07-a582-bebc5edeae3e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.437381 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71af9722-bf72-4336-9e21-4fe59401c4c8-proxy-tls\") pod \"machine-config-controller-84d6567774-pgbkf\" (UID: \"71af9722-bf72-4336-9e21-4fe59401c4c8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.437408 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.437431 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpfl\" (UniqueName: \"kubernetes.io/projected/e76da010-cdc9-41ad-82a5-19ecb9a7e278-kube-api-access-srpfl\") pod \"dns-operator-744455d44c-6qrtn\" (UID: \"e76da010-cdc9-41ad-82a5-19ecb9a7e278\") " pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.437451 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22bdca4c-6929-427a-93bd-6b39d5b1376d-service-ca-bundle\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.437486 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-bound-sa-token\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.437536 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: E0930 17:18:04.439615 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:04.939594903 +0000 UTC m=+142.235032461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.540523 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:04 crc kubenswrapper[4688]: E0930 17:18:04.541054 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.041022716 +0000 UTC m=+142.336460274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.541528 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpv7b\" (UniqueName: \"kubernetes.io/projected/5b3986b0-0f8b-4404-aefd-f8a0510e1150-kube-api-access-gpv7b\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.544737 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl2f9\" (UniqueName: \"kubernetes.io/projected/e5286b99-2b69-4447-9137-a65e72ed5d3a-kube-api-access-xl2f9\") pod \"migrator-59844c95c7-fbxbf\" (UID: \"e5286b99-2b69-4447-9137-a65e72ed5d3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.544777 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/52fbf822-f7a7-4048-aef9-493aa1989f48-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.544816 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4qp2\" (UniqueName: \"kubernetes.io/projected/71af9722-bf72-4336-9e21-4fe59401c4c8-kube-api-access-p4qp2\") pod \"machine-config-controller-84d6567774-pgbkf\" (UID: \"71af9722-bf72-4336-9e21-4fe59401c4c8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.544871 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b3986b0-0f8b-4404-aefd-f8a0510e1150-webhook-cert\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.544893 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-trusted-ca\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.544918 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7n7x\" (UniqueName: \"kubernetes.io/projected/a171a3b9-c317-4aa7-84c8-f201dbaa653d-kube-api-access-b7n7x\") pod \"marketplace-operator-79b997595-smnqd\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.544941 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-trusted-ca\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.544963 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fc656260-bc8c-4a9d-b6cb-ddea5e161058-signing-key\") pod \"service-ca-9c57cc56f-45pf9\" (UID: \"fc656260-bc8c-4a9d-b6cb-ddea5e161058\") " pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.544982 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8klt2\" (UniqueName: \"kubernetes.io/projected/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-kube-api-access-8klt2\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545004 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhm9q\" (UniqueName: \"kubernetes.io/projected/bf1537ab-4a36-4cae-aa5c-5035d1705936-kube-api-access-lhm9q\") pod \"collect-profiles-29320875-n5ljd\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545029 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/012d15a1-9240-4170-be9c-5578b1036190-profile-collector-cert\") pod \"catalog-operator-68c6474976-5qlst\" (UID: \"012d15a1-9240-4170-be9c-5578b1036190\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545051 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdnnx\" (UniqueName: \"kubernetes.io/projected/53f98e9c-2118-4997-b893-dd6744089ce9-kube-api-access-qdnnx\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545071 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22bdca4c-6929-427a-93bd-6b39d5b1376d-metrics-certs\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545094 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z697s\" (UniqueName: \"kubernetes.io/projected/1c3ebb64-9996-4f07-a582-bebc5edeae3e-kube-api-access-z697s\") pod \"multus-admission-controller-857f4d67dd-d9xbs\" (UID: \"1c3ebb64-9996-4f07-a582-bebc5edeae3e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545116 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545145 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71af9722-bf72-4336-9e21-4fe59401c4c8-proxy-tls\") pod \"machine-config-controller-84d6567774-pgbkf\" (UID: \"71af9722-bf72-4336-9e21-4fe59401c4c8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545172 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpfl\" (UniqueName: \"kubernetes.io/projected/e76da010-cdc9-41ad-82a5-19ecb9a7e278-kube-api-access-srpfl\") pod \"dns-operator-744455d44c-6qrtn\" (UID: \"e76da010-cdc9-41ad-82a5-19ecb9a7e278\") " pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545196 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22bdca4c-6929-427a-93bd-6b39d5b1376d-service-ca-bundle\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545217 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-bound-sa-token\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545247 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545283 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fc656260-bc8c-4a9d-b6cb-ddea5e161058-signing-cabundle\") pod \"service-ca-9c57cc56f-45pf9\" (UID: \"fc656260-bc8c-4a9d-b6cb-ddea5e161058\") " pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545303 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrw5l\" (UniqueName: \"kubernetes.io/projected/645a26b3-241a-4738-8654-689e2dd6b93a-kube-api-access-jrw5l\") pod \"service-ca-operator-777779d784-6ft79\" (UID: \"645a26b3-241a-4738-8654-689e2dd6b93a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545326 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1537ab-4a36-4cae-aa5c-5035d1705936-config-volume\") pod \"collect-profiles-29320875-n5ljd\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:04 crc 
kubenswrapper[4688]: I0930 17:18:04.545356 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ggg4\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-kube-api-access-6ggg4\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545375 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-metrics-tls\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545394 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc255fcb-01db-4aed-b6d4-89f4c5555dd6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rmr2p\" (UID: \"fc255fcb-01db-4aed-b6d4-89f4c5555dd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545413 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53f98e9c-2118-4997-b893-dd6744089ce9-proxy-tls\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545432 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-smnqd\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545457 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5wfg\" (UniqueName: \"kubernetes.io/projected/fc255fcb-01db-4aed-b6d4-89f4c5555dd6-kube-api-access-h5wfg\") pod \"package-server-manager-789f6589d5-rmr2p\" (UID: \"fc255fcb-01db-4aed-b6d4-89f4c5555dd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545476 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53f98e9c-2118-4997-b893-dd6744089ce9-images\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545499 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-registry-certificates\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545519 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b3787c4-3ac4-4a95-b946-6fe5e8d91b94-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mjln6\" (UID: \"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545543 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-smnqd\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545562 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/14d12c61-000b-4766-93ff-ae30454582a7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545599 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8nmw\" (UniqueName: \"kubernetes.io/projected/fc656260-bc8c-4a9d-b6cb-ddea5e161058-kube-api-access-q8nmw\") pod \"service-ca-9c57cc56f-45pf9\" (UID: \"fc656260-bc8c-4a9d-b6cb-ddea5e161058\") " pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545619 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e76da010-cdc9-41ad-82a5-19ecb9a7e278-metrics-tls\") pod \"dns-operator-744455d44c-6qrtn\" (UID: \"e76da010-cdc9-41ad-82a5-19ecb9a7e278\") " pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545649 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/22bdca4c-6929-427a-93bd-6b39d5b1376d-default-certificate\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545671 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645a26b3-241a-4738-8654-689e2dd6b93a-config\") pod \"service-ca-operator-777779d784-6ft79\" (UID: \"645a26b3-241a-4738-8654-689e2dd6b93a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545702 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b3787c4-3ac4-4a95-b946-6fe5e8d91b94-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mjln6\" (UID: \"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545729 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/53f98e9c-2118-4997-b893-dd6744089ce9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545763 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l92xb\" (UniqueName: \"kubernetes.io/projected/52fbf822-f7a7-4048-aef9-493aa1989f48-kube-api-access-l92xb\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545795 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b3986b0-0f8b-4404-aefd-f8a0510e1150-apiservice-cert\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545818 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71af9722-bf72-4336-9e21-4fe59401c4c8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pgbkf\" (UID: \"71af9722-bf72-4336-9e21-4fe59401c4c8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545834 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/52fbf822-f7a7-4048-aef9-493aa1989f48-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545859 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4fd\" (UniqueName: \"kubernetes.io/projected/012d15a1-9240-4170-be9c-5578b1036190-kube-api-access-vs4fd\") pod \"catalog-operator-68c6474976-5qlst\" (UID: \"012d15a1-9240-4170-be9c-5578b1036190\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545881 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5b3986b0-0f8b-4404-aefd-f8a0510e1150-tmpfs\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545905 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c3ebb64-9996-4f07-a582-bebc5edeae3e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d9xbs\" (UID: \"1c3ebb64-9996-4f07-a582-bebc5edeae3e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545929 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/012d15a1-9240-4170-be9c-5578b1036190-srv-cert\") pod 
\"catalog-operator-68c6474976-5qlst\" (UID: \"012d15a1-9240-4170-be9c-5578b1036190\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545956 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75scv\" (UniqueName: \"kubernetes.io/projected/22bdca4c-6929-427a-93bd-6b39d5b1376d-kube-api-access-75scv\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.545980 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1537ab-4a36-4cae-aa5c-5035d1705936-secret-volume\") pod \"collect-profiles-29320875-n5ljd\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.546004 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/52fbf822-f7a7-4048-aef9-493aa1989f48-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.546028 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645a26b3-241a-4738-8654-689e2dd6b93a-serving-cert\") pod \"service-ca-operator-777779d784-6ft79\" (UID: \"645a26b3-241a-4738-8654-689e2dd6b93a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.546050 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-registry-tls\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.546069 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b3787c4-3ac4-4a95-b946-6fe5e8d91b94-config\") pod \"kube-controller-manager-operator-78b949d7b-mjln6\" (UID: \"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.546095 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/22bdca4c-6929-427a-93bd-6b39d5b1376d-stats-auth\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.546118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/14d12c61-000b-4766-93ff-ae30454582a7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 
17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.546538 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/14d12c61-000b-4766-93ff-ae30454582a7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.547181 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645a26b3-241a-4738-8654-689e2dd6b93a-config\") pod \"service-ca-operator-777779d784-6ft79\" (UID: \"645a26b3-241a-4738-8654-689e2dd6b93a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.547682 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-trusted-ca\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.547888 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/53f98e9c-2118-4997-b893-dd6744089ce9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.549397 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/52fbf822-f7a7-4048-aef9-493aa1989f48-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.563228 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53f98e9c-2118-4997-b893-dd6744089ce9-images\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.564327 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-registry-certificates\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.565749 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22bdca4c-6929-427a-93bd-6b39d5b1376d-service-ca-bundle\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.568403 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-smnqd\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.569458 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22bdca4c-6929-427a-93bd-6b39d5b1376d-metrics-certs\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.570303 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-trusted-ca\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: E0930 17:18:04.571151 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.071129714 +0000 UTC m=+142.366567272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.572278 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fc656260-bc8c-4a9d-b6cb-ddea5e161058-signing-cabundle\") pod \"service-ca-9c57cc56f-45pf9\" (UID: \"fc656260-bc8c-4a9d-b6cb-ddea5e161058\") " pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.572841 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5b3986b0-0f8b-4404-aefd-f8a0510e1150-tmpfs\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.573829 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1537ab-4a36-4cae-aa5c-5035d1705936-config-volume\") pod \"collect-profiles-29320875-n5ljd\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.574779 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b3986b0-0f8b-4404-aefd-f8a0510e1150-apiservice-cert\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.574907 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/012d15a1-9240-4170-be9c-5578b1036190-profile-collector-cert\") pod \"catalog-operator-68c6474976-5qlst\" (UID: \"012d15a1-9240-4170-be9c-5578b1036190\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.577546 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71af9722-bf72-4336-9e21-4fe59401c4c8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pgbkf\" (UID: \"71af9722-bf72-4336-9e21-4fe59401c4c8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.578373 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b3986b0-0f8b-4404-aefd-f8a0510e1150-webhook-cert\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.579513 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/22bdca4c-6929-427a-93bd-6b39d5b1376d-default-certificate\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.579964 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b3787c4-3ac4-4a95-b946-6fe5e8d91b94-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mjln6\" (UID: \"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.580368 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b3787c4-3ac4-4a95-b946-6fe5e8d91b94-config\") pod \"kube-controller-manager-operator-78b949d7b-mjln6\" (UID: \"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.580163 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-smnqd\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.581794 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc255fcb-01db-4aed-b6d4-89f4c5555dd6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rmr2p\" (UID: \"fc255fcb-01db-4aed-b6d4-89f4c5555dd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.582071 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/fc656260-bc8c-4a9d-b6cb-ddea5e161058-signing-key\") pod \"service-ca-9c57cc56f-45pf9\" (UID: \"fc656260-bc8c-4a9d-b6cb-ddea5e161058\") " pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.583792 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-metrics-tls\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.584329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/53f98e9c-2118-4997-b893-dd6744089ce9-proxy-tls\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.585453 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e76da010-cdc9-41ad-82a5-19ecb9a7e278-metrics-tls\") pod \"dns-operator-744455d44c-6qrtn\" (UID: \"e76da010-cdc9-41ad-82a5-19ecb9a7e278\") " pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.584898 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/14d12c61-000b-4766-93ff-ae30454582a7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.585474 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1537ab-4a36-4cae-aa5c-5035d1705936-secret-volume\") pod \"collect-profiles-29320875-n5ljd\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.586772 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/22bdca4c-6929-427a-93bd-6b39d5b1376d-stats-auth\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.587320 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/012d15a1-9240-4170-be9c-5578b1036190-srv-cert\") pod \"catalog-operator-68c6474976-5qlst\" (UID: \"012d15a1-9240-4170-be9c-5578b1036190\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.587519 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71af9722-bf72-4336-9e21-4fe59401c4c8-proxy-tls\") pod \"machine-config-controller-84d6567774-pgbkf\" (UID: \"71af9722-bf72-4336-9e21-4fe59401c4c8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.591652 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/52fbf822-f7a7-4048-aef9-493aa1989f48-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.594416 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1c3ebb64-9996-4f07-a582-bebc5edeae3e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d9xbs\" (UID: \"1c3ebb64-9996-4f07-a582-bebc5edeae3e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.599467 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpv7b\" (UniqueName: \"kubernetes.io/projected/5b3986b0-0f8b-4404-aefd-f8a0510e1150-kube-api-access-gpv7b\") pod \"packageserver-d55dfcdfc-68dks\" (UID: \"5b3986b0-0f8b-4404-aefd-f8a0510e1150\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.600241 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-registry-tls\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.618133 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh"]
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.618677 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645a26b3-241a-4738-8654-689e2dd6b93a-serving-cert\") pod \"service-ca-operator-777779d784-6ft79\" (UID: \"645a26b3-241a-4738-8654-689e2dd6b93a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.627136 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl2f9\" (UniqueName: \"kubernetes.io/projected/e5286b99-2b69-4447-9137-a65e72ed5d3a-kube-api-access-xl2f9\") pod \"migrator-59844c95c7-fbxbf\" (UID: \"e5286b99-2b69-4447-9137-a65e72ed5d3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.633936 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b3787c4-3ac4-4a95-b946-6fe5e8d91b94-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mjln6\" (UID: \"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.655412 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l92xb\" (UniqueName: \"kubernetes.io/projected/52fbf822-f7a7-4048-aef9-493aa1989f48-kube-api-access-l92xb\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.662976 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663356 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgksw\" (UniqueName: \"kubernetes.io/projected/5ddf43f5-3af1-4de7-b474-37dc49c3ab48-kube-api-access-wgksw\") pod \"dns-default-65pp7\" (UID: \"5ddf43f5-3af1-4de7-b474-37dc49c3ab48\") " pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663437 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-plugins-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663456 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ddf43f5-3af1-4de7-b474-37dc49c3ab48-metrics-tls\") pod \"dns-default-65pp7\" (UID: \"5ddf43f5-3af1-4de7-b474-37dc49c3ab48\") " pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663471 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-socket-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663490 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rgtq\" (UniqueName: \"kubernetes.io/projected/dc3db99d-3df2-4b2c-8303-8366c0165cd6-kube-api-access-4rgtq\") pod \"machine-config-server-9nd28\" (UID: \"dc3db99d-3df2-4b2c-8303-8366c0165cd6\") " pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663587 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4qp2\" (UniqueName: \"kubernetes.io/projected/71af9722-bf72-4336-9e21-4fe59401c4c8-kube-api-access-p4qp2\") pod \"machine-config-controller-84d6567774-pgbkf\" (UID: \"71af9722-bf72-4336-9e21-4fe59401c4c8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663757 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-645m5\" (UniqueName: \"kubernetes.io/projected/874fcfcf-2899-4b9d-864b-c9a35a52f214-kube-api-access-645m5\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663783 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-mountpoint-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663836 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc3db99d-3df2-4b2c-8303-8366c0165cd6-certs\") pod \"machine-config-server-9nd28\" (UID: \"dc3db99d-3df2-4b2c-8303-8366c0165cd6\") " pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663875 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc3db99d-3df2-4b2c-8303-8366c0165cd6-node-bootstrap-token\") pod \"machine-config-server-9nd28\" (UID: \"dc3db99d-3df2-4b2c-8303-8366c0165cd6\") " pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663929 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b13f5778-4f7d-4528-baf7-cfbbb712ea77-cert\") pod \"ingress-canary-lwmxt\" (UID: \"b13f5778-4f7d-4528-baf7-cfbbb712ea77\") " pod="openshift-ingress-canary/ingress-canary-lwmxt"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663945 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-registration-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.663993 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mvzg\" (UniqueName: \"kubernetes.io/projected/b13f5778-4f7d-4528-baf7-cfbbb712ea77-kube-api-access-6mvzg\") pod \"ingress-canary-lwmxt\" (UID: \"b13f5778-4f7d-4528-baf7-cfbbb712ea77\") " pod="openshift-ingress-canary/ingress-canary-lwmxt"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.664052 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-csi-data-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.664080 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ddf43f5-3af1-4de7-b474-37dc49c3ab48-config-volume\") pod \"dns-default-65pp7\" (UID: \"5ddf43f5-3af1-4de7-b474-37dc49c3ab48\") " pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:04 crc kubenswrapper[4688]: E0930 17:18:04.664684 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.164661061 +0000 UTC m=+142.460098619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.696793 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5wfg\" (UniqueName: \"kubernetes.io/projected/fc255fcb-01db-4aed-b6d4-89f4c5555dd6-kube-api-access-h5wfg\") pod \"package-server-manager-789f6589d5-rmr2p\" (UID: \"fc255fcb-01db-4aed-b6d4-89f4c5555dd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.708862 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.722570 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.725345 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.731382 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z697s\" (UniqueName: \"kubernetes.io/projected/1c3ebb64-9996-4f07-a582-bebc5edeae3e-kube-api-access-z697s\") pod \"multus-admission-controller-857f4d67dd-d9xbs\" (UID: \"1c3ebb64-9996-4f07-a582-bebc5edeae3e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.737296 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs"
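The repeated "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" errors above and below are an ordering problem rather than data loss: kubelet only learns about a CSI driver once the driver's registrar creates a socket in the kubelet plugin-registration directory, and the csi-hostpathplugin-brznm pod that provides this driver is itself only having its volumes mounted in this same window (note its registration-dir and socket-dir host-path volumes). Until that pod is running, every TearDownAt/MountDevice on pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails and is requeued. A minimal diagnostic sketch, assuming the default kubelet path /var/lib/kubelet/plugins_registry and the conventional <driver-name>-reg.sock socket naming used by node-driver-registrar (run on the node itself):

    // csicheck.go - list the plugin-registration sockets kubelet can currently see.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const regDir = "/var/lib/kubelet/plugins_registry" // assumed default path
        entries, err := os.ReadDir(regDir)
        if err != nil {
            fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", regDir, err)
            os.Exit(1)
        }
        registered := false
        for _, e := range entries {
            fmt.Println(e.Name()) // typically one <driver-name>-reg.sock per registered driver
            if strings.HasPrefix(e.Name(), "kubevirt.io.hostpath-provisioner") {
                registered = true
            }
        }
        if !registered {
            fmt.Println("kubevirt.io.hostpath-provisioner has not registered yet")
        }
    }

Once the hostpath plugin pod starts and its registration socket appears, the retried operations in the lines below should begin to succeed without intervention.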
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.752816 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7n7x\" (UniqueName: \"kubernetes.io/projected/a171a3b9-c317-4aa7-84c8-f201dbaa653d-kube-api-access-b7n7x\") pod \"marketplace-operator-79b997595-smnqd\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " pod="openshift-marketplace/marketplace-operator-79b997595-smnqd"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.764960 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rgtq\" (UniqueName: \"kubernetes.io/projected/dc3db99d-3df2-4b2c-8303-8366c0165cd6-kube-api-access-4rgtq\") pod \"machine-config-server-9nd28\" (UID: \"dc3db99d-3df2-4b2c-8303-8366c0165cd6\") " pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765070 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765107 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-645m5\" (UniqueName: \"kubernetes.io/projected/874fcfcf-2899-4b9d-864b-c9a35a52f214-kube-api-access-645m5\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765124 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-mountpoint-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765152 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc3db99d-3df2-4b2c-8303-8366c0165cd6-certs\") pod \"machine-config-server-9nd28\" (UID: \"dc3db99d-3df2-4b2c-8303-8366c0165cd6\") " pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765299 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc3db99d-3df2-4b2c-8303-8366c0165cd6-node-bootstrap-token\") pod \"machine-config-server-9nd28\" (UID: \"dc3db99d-3df2-4b2c-8303-8366c0165cd6\") " pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765327 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b13f5778-4f7d-4528-baf7-cfbbb712ea77-cert\") pod \"ingress-canary-lwmxt\" (UID: \"b13f5778-4f7d-4528-baf7-cfbbb712ea77\") " pod="openshift-ingress-canary/ingress-canary-lwmxt"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765365 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-registration-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765393 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mvzg\" (UniqueName: \"kubernetes.io/projected/b13f5778-4f7d-4528-baf7-cfbbb712ea77-kube-api-access-6mvzg\") pod \"ingress-canary-lwmxt\" (UID: \"b13f5778-4f7d-4528-baf7-cfbbb712ea77\") " pod="openshift-ingress-canary/ingress-canary-lwmxt"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765432 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-csi-data-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765458 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ddf43f5-3af1-4de7-b474-37dc49c3ab48-config-volume\") pod \"dns-default-65pp7\" (UID: \"5ddf43f5-3af1-4de7-b474-37dc49c3ab48\") " pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765485 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgksw\" (UniqueName: \"kubernetes.io/projected/5ddf43f5-3af1-4de7-b474-37dc49c3ab48-kube-api-access-wgksw\") pod \"dns-default-65pp7\" (UID: \"5ddf43f5-3af1-4de7-b474-37dc49c3ab48\") " pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765524 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-plugins-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765542 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ddf43f5-3af1-4de7-b474-37dc49c3ab48-metrics-tls\") pod \"dns-default-65pp7\" (UID: \"5ddf43f5-3af1-4de7-b474-37dc49c3ab48\") " pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765557 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-socket-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.765876 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-socket-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.766108 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-registration-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.766362 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-csi-data-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: E0930 17:18:04.766597 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.266562866 +0000 UTC m=+142.562000424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.767086 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-mountpoint-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.767350 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ddf43f5-3af1-4de7-b474-37dc49c3ab48-config-volume\") pod \"dns-default-65pp7\" (UID: \"5ddf43f5-3af1-4de7-b474-37dc49c3ab48\") " pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.767460 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/874fcfcf-2899-4b9d-864b-c9a35a52f214-plugins-dir\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.771428 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdnnx\" (UniqueName: \"kubernetes.io/projected/53f98e9c-2118-4997-b893-dd6744089ce9-kube-api-access-qdnnx\") pod \"machine-config-operator-74547568cd-nz5qz\" (UID: \"53f98e9c-2118-4997-b893-dd6744089ce9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.771984 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc3db99d-3df2-4b2c-8303-8366c0165cd6-certs\") pod \"machine-config-server-9nd28\" (UID: \"dc3db99d-3df2-4b2c-8303-8366c0165cd6\") " pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.778387 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.782514 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.789631 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk"]
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.792062 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75scv\" (UniqueName: \"kubernetes.io/projected/22bdca4c-6929-427a-93bd-6b39d5b1376d-kube-api-access-75scv\") pod \"router-default-5444994796-bhb8d\" (UID: \"22bdca4c-6929-427a-93bd-6b39d5b1376d\") " pod="openshift-ingress/router-default-5444994796-bhb8d"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.795130 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ddf43f5-3af1-4de7-b474-37dc49c3ab48-metrics-tls\") pod \"dns-default-65pp7\" (UID: \"5ddf43f5-3af1-4de7-b474-37dc49c3ab48\") " pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.797558 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc3db99d-3df2-4b2c-8303-8366c0165cd6-node-bootstrap-token\") pod \"machine-config-server-9nd28\" (UID: \"dc3db99d-3df2-4b2c-8303-8366c0165cd6\") " pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.797836 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.802186 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b13f5778-4f7d-4528-baf7-cfbbb712ea77-cert\") pod \"ingress-canary-lwmxt\" (UID: \"b13f5778-4f7d-4528-baf7-cfbbb712ea77\") " pod="openshift-ingress-canary/ingress-canary-lwmxt"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.812295 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.824956 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-bound-sa-token\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.838414 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/52fbf822-f7a7-4048-aef9-493aa1989f48-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-phl9k\" (UID: \"52fbf822-f7a7-4048-aef9-493aa1989f48\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.849310 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrw5l\" (UniqueName: \"kubernetes.io/projected/645a26b3-241a-4738-8654-689e2dd6b93a-kube-api-access-jrw5l\") pod \"service-ca-operator-777779d784-6ft79\" (UID: \"645a26b3-241a-4738-8654-689e2dd6b93a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.867338 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:04 crc kubenswrapper[4688]: E0930 17:18:04.867414 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.367384643 +0000 UTC m=+142.662822201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.867818 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:04 crc kubenswrapper[4688]: E0930 17:18:04.868628 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.368609025 +0000 UTC m=+142.664046573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
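Each nestedpendingoperations.go:348 line encodes the retry bookkeeping for a failed volume operation: a deadline of failure time plus durationBeforeRetry (500ms here) is recorded, and the reconciler may not retry that volume until the deadline passes. The m=+ suffix is Go's monotonic-clock reading, i.e. seconds since the kubelet process's time baseline. A worked check of the first TearDown failure logged at 17:18:04.664684 above, under the assumption that the deadline is computed a few microseconds before the log line is emitted:

    // retrywindow.go - reproduce "No retries permitted until" for the first
    // TearDown failure: deadline = instant of failure + durationBeforeRetry.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Instant the operation failed (deadline in the log minus 500ms).
        failedAt, _ := time.Parse(time.RFC3339Nano, "2025-09-30T17:18:04.664661061Z")
        deadline := failedAt.Add(500 * time.Millisecond)
        fmt.Println(deadline.UTC()) // 2025-09-30 17:18:05.164661061 +0000 UTC, as logged
    }

The successive deadlines in the lines that follow advance in roughly half-second steps, which is why the same pair of errors recurs every few hundred milliseconds until the driver registers.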
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.868750 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4fd\" (UniqueName: \"kubernetes.io/projected/012d15a1-9240-4170-be9c-5578b1036190-kube-api-access-vs4fd\") pod \"catalog-operator-68c6474976-5qlst\" (UID: \"012d15a1-9240-4170-be9c-5578b1036190\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.871027 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6jdt9"]
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.883024 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84"]
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.919071 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpfl\" (UniqueName: \"kubernetes.io/projected/e76da010-cdc9-41ad-82a5-19ecb9a7e278-kube-api-access-srpfl\") pod \"dns-operator-744455d44c-6qrtn\" (UID: \"e76da010-cdc9-41ad-82a5-19ecb9a7e278\") " pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.921863 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ggg4\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-kube-api-access-6ggg4\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.964386 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8klt2\" (UniqueName: \"kubernetes.io/projected/2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9-kube-api-access-8klt2\") pod \"ingress-operator-5b745b69d9-6v2mg\" (UID: \"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.968991 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:04 crc kubenswrapper[4688]: E0930 17:18:04.969650 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.469623227 +0000 UTC m=+142.765060775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.977248 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8nmw\" (UniqueName: \"kubernetes.io/projected/fc656260-bc8c-4a9d-b6cb-ddea5e161058-kube-api-access-q8nmw\") pod \"service-ca-9c57cc56f-45pf9\" (UID: \"fc656260-bc8c-4a9d-b6cb-ddea5e161058\") " pod="openshift-service-ca/service-ca-9c57cc56f-45pf9"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.990616 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhm9q\" (UniqueName: \"kubernetes.io/projected/bf1537ab-4a36-4cae-aa5c-5035d1705936-kube-api-access-lhm9q\") pod \"collect-profiles-29320875-n5ljd\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd"
Sep 30 17:18:04 crc kubenswrapper[4688]: I0930 17:18:04.990971 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.000857 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.003800 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rgtq\" (UniqueName: \"kubernetes.io/projected/dc3db99d-3df2-4b2c-8303-8366c0165cd6-kube-api-access-4rgtq\") pod \"machine-config-server-9nd28\" (UID: \"dc3db99d-3df2-4b2c-8303-8366c0165cd6\") " pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.028809 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mvzg\" (UniqueName: \"kubernetes.io/projected/b13f5778-4f7d-4528-baf7-cfbbb712ea77-kube-api-access-6mvzg\") pod \"ingress-canary-lwmxt\" (UID: \"b13f5778-4f7d-4528-baf7-cfbbb712ea77\") " pod="openshift-ingress-canary/ingress-canary-lwmxt"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.029415 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.045000 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bhb8d"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.049967 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-645m5\" (UniqueName: \"kubernetes.io/projected/874fcfcf-2899-4b9d-864b-c9a35a52f214-kube-api-access-645m5\") pod \"csi-hostpathplugin-brznm\" (UID: \"874fcfcf-2899-4b9d-864b-c9a35a52f214\") " pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.053965 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.059424 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.061507 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.071686 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.072215 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.57220037 +0000 UTC m=+142.867637928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.073970 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgksw\" (UniqueName: \"kubernetes.io/projected/5ddf43f5-3af1-4de7-b474-37dc49c3ab48-kube-api-access-wgksw\") pod \"dns-default-65pp7\" (UID: \"5ddf43f5-3af1-4de7-b474-37dc49c3ab48\") " pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.098168 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.113951 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-45pf9"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.123008 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9nd28"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.160161 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-brznm"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.178307 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m"]
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.178361 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t6z28"]
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.178714 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-65pp7"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.179032 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.179500 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.679473136 +0000 UTC m=+142.974910694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.179618 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lwmxt"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.193154 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wl7nh"]
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.294232 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.294736 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.79471665 +0000 UTC m=+143.090154218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:05 crc kubenswrapper[4688]: W0930 17:18:05.346627 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541f325d_4d87_4e4f_93c6_11a1d8406c3e.slice/crio-a9dd2c4f99a250591beaa66f1bb76b8077a75c5c952db802fb7e3e5165c7bd97 WatchSource:0}: Error finding container a9dd2c4f99a250591beaa66f1bb76b8077a75c5c952db802fb7e3e5165c7bd97: Status 404 returned error can't find the container with id a9dd2c4f99a250591beaa66f1bb76b8077a75c5c952db802fb7e3e5165c7bd97
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.397235 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.398526 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:05.898479774 +0000 UTC m=+143.193917332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.504541 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.505137 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.005116853 +0000 UTC m=+143.300554411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
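The W-level manager.go:1169 message above appears to be the usual startup race between cadvisor's cgroup watcher and CRI-O: the crio-<id> cgroup shows up in cgroupfs before the container is queryable over CRI, the lookup returns 404, and the watcher catches up on a later relist, so the warning is transient during a pod-start burst like this one. The pod UID and container ID are recoverable from the cgroup path itself; with the systemd cgroup driver the UID's dashes are encoded as underscores inside the .slice name. A small parsing sketch:

    // cgrouppath.go - pull the pod UID and container ID out of the cgroup
    // path from the watch-event warning above.
    package main

    import (
        "fmt"
        "strings"
    )

    func parse(path string) (podUID, containerID string) {
        for _, seg := range strings.Split(path, "/") {
            if strings.HasPrefix(seg, "kubepods-burstable-pod") && strings.HasSuffix(seg, ".slice") {
                uid := strings.TrimSuffix(strings.TrimPrefix(seg, "kubepods-burstable-pod"), ".slice")
                podUID = strings.ReplaceAll(uid, "_", "-") // systemd encodes dashes as underscores
            }
            if strings.HasPrefix(seg, "crio-") {
                containerID = strings.TrimPrefix(seg, "crio-") // CRI-O prefixes container cgroups
            }
        }
        return podUID, containerID
    }

    func main() {
        uid, cid := parse("/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541f325d_4d87_4e4f_93c6_11a1d8406c3e.slice/crio-a9dd2c4f99a250591beaa66f1bb76b8077a75c5c952db802fb7e3e5165c7bd97")
        fmt.Println(uid) // 541f325d-4d87-4e4f-93c6-11a1d8406c3e
        fmt.Println(cid) // a9dd2c4f99a250591beaa66f1bb76b8077a75c5c952db802fb7e3e5165c7bd97
    }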
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.544622 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" event={"ID":"95a1453c-5fd6-494f-ba7c-af218220ede9","Type":"ContainerStarted","Data":"ed575f788fc6000b43714a3666979eff42e3feecc0785271c8d2c0633b110fe6"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.544676 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.544696 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt"]
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.544710 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" event={"ID":"1fb1e11f-d437-42db-9fd2-c15e98b8b8ef","Type":"ContainerStarted","Data":"bd82eba9063a7b915575a48c6d87878697176333c7b54b3852618f3febe9a42f"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.544724 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4"]
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.564705 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p"]
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.566185 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t6z28" event={"ID":"24dbec1e-8bd6-471c-bda5-ac5df4a771a1","Type":"ContainerStarted","Data":"1b916f2c12be763ef989c966753662461ba62568e92ea776f34d44221781f502"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.579487 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" event={"ID":"6f650a2d-90e1-465d-bc6b-1451aa0099ac","Type":"ContainerStarted","Data":"4b1a667a73787a2c3b1b9cfcfc978d6e12a2430562bc4d26dd591d01e31fd31e"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.583295 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" event={"ID":"b23776ac-7592-4f59-8b10-27d85e19ae81","Type":"ContainerStarted","Data":"9303710b4d7adf3f27dad3ec2c4a3768c4ba5dead8b83a139f92ad230ddf64b1"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.583535 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" event={"ID":"b23776ac-7592-4f59-8b10-27d85e19ae81","Type":"ContainerStarted","Data":"5c6b8a4dd10f5675c53fb6ed80302be8540554c3ad19255f5df44ae5824dde36"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.606562 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.606857 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.106820373 +0000 UTC m=+143.402257931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.607110 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.608028 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.108012134 +0000 UTC m=+143.403449692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.625498 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" event={"ID":"213ea898-4f81-49ad-afaa-000178b8d1f4","Type":"ContainerStarted","Data":"83b479fd845e43958f56f4ba4b82ce141d9e7999bb49daeff2144fcee93ed245"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.625553 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" event={"ID":"213ea898-4f81-49ad-afaa-000178b8d1f4","Type":"ContainerStarted","Data":"828ef15c6c6e231cdbd8fa232dde9583946aaac90ece662815c15de233004eb8"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.627250 4688 generic.go:334] "Generic (PLEG): container finished" podID="d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef" containerID="0fd75ae841c6a88a4bee4af8191615010d79438214156140695f3983b7949d6b" exitCode=0
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.627341 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" event={"ID":"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef","Type":"ContainerDied","Data":"0fd75ae841c6a88a4bee4af8191615010d79438214156140695f3983b7949d6b"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.629128 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" event={"ID":"7adef629-f94a-4968-bd8a-89b01e4f9859","Type":"ContainerStarted","Data":"a9c2fb3b060cb9430c529f64bf490e9f695fbe887b7d312dbc948e9fe1c098f2"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.642830 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" event={"ID":"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311","Type":"ContainerStarted","Data":"cc27ac45da5a782cb6a2d32cc944ebe59444bfd445d2449ecaea1b7988bd98e5"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.642879 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" event={"ID":"b0dd035e-5b22-4ac8-8cfd-a08d50e5e311","Type":"ContainerStarted","Data":"dda92e6fcd5ddd87e2e233a890657bba1d95fd0cb86ccd989c9dd547cce0710b"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.675875 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" event={"ID":"651f3b4e-974b-4637-a09c-2e7b95407c8f","Type":"ContainerStarted","Data":"d3ff3595e680edcd981025cff7d75124db83b6f31080ae463f791f136bfd7041"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.675943 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" event={"ID":"651f3b4e-974b-4637-a09c-2e7b95407c8f","Type":"ContainerStarted","Data":"4d40670cfc85ce825faf20f2039345d60c0d194d2c0d8d8601e8e3eb220a48e5"}
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.708525 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
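Note that the same UniqueName keeps cycling through both directions above: an UnmountVolume for the departed pod UID 8f668bae-612b-4b75-9490-919e737c6a3b and a MountVolume for its replacement image-registry-697d97f7c8-85wjs, both blocked on the same unregistered driver. As the log lines themselves show, the CSI UniqueName is composed as kubernetes.io/csi/<driver>^<volumeHandle>, so the two halves split on the caret; a minimal parsing sketch:

    // uniquename.go - split the CSI UniqueName used throughout these lines.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        u := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
        driverPart, handle, ok := strings.Cut(u, "^")
        if !ok {
            panic("not a CSI unique volume name")
        }
        fmt.Println(strings.TrimPrefix(driverPart, "kubernetes.io/csi/")) // kubevirt.io.hostpath-provisioner
        fmt.Println(handle) // pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 (the PV name)
    }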
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.709042 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.209008906 +0000 UTC m=+143.504446464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.709345 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.710291 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.210270469 +0000 UTC m=+143.505708027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.810610 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4pgj" podStartSLOduration=122.810555772 podStartE2EDuration="2m2.810555772s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:05.772377314 +0000 UTC m=+143.067814872" watchObservedRunningTime="2025-09-30 17:18:05.810555772 +0000 UTC m=+143.105993330"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.810898 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.813196 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.313171611 +0000 UTC m=+143.608609169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.814367 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.864196 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.913027 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:05 crc kubenswrapper[4688]: E0930 17:18:05.913457 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.413441883 +0000 UTC m=+143.708879441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
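The pod_startup_latency_tracker line above is internally consistent: with no image pull recorded (firstStartedPulling and lastFinishedPulling are the zero time), podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. A quick check with the openshift-apiserver-operator values from that line:

    // startuplatency.go - recompute podStartSLOduration from the logged fields.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-09-30T17:16:03Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2025-09-30T17:18:05.810555772Z")
        fmt.Printf("%.9f\n", observed.Sub(created).Seconds()) // 122.810555772, as logged
    }

The same arithmetic reproduces the other startup-duration lines below (console-operator, machine-api-operator, route-controller-manager).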
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.916579 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bfz2g" podStartSLOduration=122.916540894 podStartE2EDuration="2m2.916540894s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:05.86591725 +0000 UTC m=+143.161354818" watchObservedRunningTime="2025-09-30 17:18:05.916540894 +0000 UTC m=+143.211978452"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.966997 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bfz2g"
Sep 30 17:18:05 crc kubenswrapper[4688]: I0930 17:18:05.998439 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d9xbs"]
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.001967 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tfh56" podStartSLOduration=123.001938878 podStartE2EDuration="2m3.001938878s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:05.976211635 +0000 UTC m=+143.271649203" watchObservedRunningTime="2025-09-30 17:18:06.001938878 +0000 UTC m=+143.297376446"
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.036005 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-smnqd"]
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.043938 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:06 crc kubenswrapper[4688]: E0930 17:18:06.052191 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.55214874 +0000 UTC m=+143.847586298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.072614 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks"]
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.090719 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl"
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.104118 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf"]
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.129675 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6"]
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.159712 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:06 crc kubenswrapper[4688]: E0930 17:18:06.160081 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.660064883 +0000 UTC m=+143.955502441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.260720 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:06 crc kubenswrapper[4688]: E0930 17:18:06.261605 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.761564908 +0000 UTC m=+144.057002466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:06 crc kubenswrapper[4688]: W0930 17:18:06.276036 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c3ebb64_9996_4f07_a582_bebc5edeae3e.slice/crio-5ebaf2d4457ca5ad43245bf220cca356a155d460e25ee63b0ab7a9e9eb631a5b WatchSource:0}: Error finding container 5ebaf2d4457ca5ad43245bf220cca356a155d460e25ee63b0ab7a9e9eb631a5b: Status 404 returned error can't find the container with id 5ebaf2d4457ca5ad43245bf220cca356a155d460e25ee63b0ab7a9e9eb631a5b
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.362528 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:06 crc kubenswrapper[4688]: E0930 17:18:06.362908 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.862893948 +0000 UTC m=+144.158331506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.409327 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" podStartSLOduration=123.409290132 podStartE2EDuration="2m3.409290132s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:06.347945987 +0000 UTC m=+143.643383555" watchObservedRunningTime="2025-09-30 17:18:06.409290132 +0000 UTC m=+143.704727690"
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.423492 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd"]
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.440168 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf"]
Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.465302 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 17:18:06 crc kubenswrapper[4688]: E0930 17:18:06.465713 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:06.965685697 +0000 UTC m=+144.261123265 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.492857 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dr6nv" podStartSLOduration=123.492834137 podStartE2EDuration="2m3.492834137s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:06.483115163 +0000 UTC m=+143.778552721" watchObservedRunningTime="2025-09-30 17:18:06.492834137 +0000 UTC m=+143.788271695" Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.519936 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k"] Sep 30 17:18:06 crc kubenswrapper[4688]: W0930 17:18:06.521547 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71af9722_bf72_4336_9e21_4fe59401c4c8.slice/crio-978ac4fd7b5f80be22bb3592567266a88958123b229678fb8e73aac528f413bf WatchSource:0}: Error finding container 978ac4fd7b5f80be22bb3592567266a88958123b229678fb8e73aac528f413bf: Status 404 returned error can't find the container with id 978ac4fd7b5f80be22bb3592567266a88958123b229678fb8e73aac528f413bf Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.566428 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:06 crc kubenswrapper[4688]: E0930 17:18:06.566930 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.066914975 +0000 UTC m=+144.362352533 (durationBeforeRetry 500ms). 
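The "Failed to process watch event ... Status 404" warnings interleaved here read like a benign startup race: the watcher sees a fresh crio-<id> cgroup before the runtime has the container on record, so the lookup by ID transiently 404s, and the same IDs appear in ContainerStarted events moments later. A tolerant handler just skips and waits for the next discovery pass; a sketch with hypothetical names throughout (this is not cadvisor's code):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("Status 404 returned error can't find the container with id")

// lookupContainer stands in for asking the runtime about a container by the
// ID scraped from the new cgroup's name; hypothetical helper.
func lookupContainer(id string) error {
	return fmt.Errorf("%w %s", errNotFound, id)
}

// onCgroupEvent tolerates the 404: the container is usually just not
// registered with the runtime yet and shows up on the next discovery pass.
func onCgroupEvent(id string) {
	if err := lookupContainer(id); errors.Is(err, errNotFound) {
		fmt.Printf("skipping %s for now: %v\n", id, err)
		return
	}
	// ...otherwise start watching the container's stats...
}

func main() {
	onCgroupEvent("5ebaf2d4457ca5ad43245bf220cca356a155d460e25ee63b0ab7a9e9eb631a5b")
}
```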
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.571798 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" podStartSLOduration=123.571776362 podStartE2EDuration="2m3.571776362s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:06.544394416 +0000 UTC m=+143.839831974" watchObservedRunningTime="2025-09-30 17:18:06.571776362 +0000 UTC m=+143.867213920" Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.654243 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" podStartSLOduration=123.654224348 podStartE2EDuration="2m3.654224348s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:06.612501167 +0000 UTC m=+143.907938725" watchObservedRunningTime="2025-09-30 17:18:06.654224348 +0000 UTC m=+143.949661906" Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.657417 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst"] Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.667400 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:06 crc kubenswrapper[4688]: E0930 17:18:06.667858 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.167833774 +0000 UTC m=+144.463271332 (durationBeforeRetry 500ms). 
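The "Observed pod startup duration" records report two figures that keep coming out equal, e.g. podStartSLOduration=123.001938878 against podStartE2EDuration="2m3.001938878s". That is consistent with firstStartedPulling/lastFinishedPulling sitting at the Go zero time (0001-01-01, i.e. no image pull was observed, as expected on a restarted node with cached images), so there is no pull interval to subtract. Whether the kubelet computes it exactly this way is an assumption; the arithmetic below simply reproduces the logged numbers from the machine-api-operator record using watchObservedRunningTime minus podCreationTimestamp:

```go
package main

import (
	"fmt"
	"time"
)

// Layout with optional fractional seconds, matching the log's timestamps.
const stamp = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, err := time.Parse(stamp, "2025-09-30 17:16:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(stamp, "2025-09-30 17:18:06.001938878 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// No observed image pull (zero-time pulling stamps), so nothing is excluded.
	e2e := watched.Sub(created)
	fmt.Printf("podStartSLOduration=%.9f podStartE2EDuration=%q\n", e2e.Seconds(), e2e)
	// Output: podStartSLOduration=123.001938878 podStartE2EDuration="2m3.001938878s"
}
```

The ~123-second figures are an artifact of the pods having been created two minutes before the kubelet came back up, not of slow container starts.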
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.677716 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:06 crc kubenswrapper[4688]: E0930 17:18:06.680282 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.180259069 +0000 UTC m=+144.475696637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.691278 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-6qrtn"] Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.752676 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg"] Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.771906 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-65pp7"] Sep 30 17:18:06 crc kubenswrapper[4688]: W0930 17:18:06.786964 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ba9d3e1_100f_4a41_8b6d_9e1b366ebda9.slice/crio-a378f44340f6e76565bae97d0843e65c729240eea5ffb8562276f41856493016 WatchSource:0}: Error finding container a378f44340f6e76565bae97d0843e65c729240eea5ffb8562276f41856493016: Status 404 returned error can't find the container with id a378f44340f6e76565bae97d0843e65c729240eea5ffb8562276f41856493016 Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.786981 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs" event={"ID":"1c3ebb64-9996-4f07-a582-bebc5edeae3e","Type":"ContainerStarted","Data":"5ebaf2d4457ca5ad43245bf220cca356a155d460e25ee63b0ab7a9e9eb631a5b"} Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.787691 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:06 crc kubenswrapper[4688]: 
E0930 17:18:06.788053 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.288032308 +0000 UTC m=+144.583469866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.789942 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz"] Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.792859 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" event={"ID":"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94","Type":"ContainerStarted","Data":"130c80b71f5d5051d381a0433ee1ff5844bc6951a6c21d43e065dbcfc685cf3b"} Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.811903 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" event={"ID":"c9556dbe-5bce-417e-84f1-1e7ed221d383","Type":"ContainerStarted","Data":"b1b9dc4a53a39edb6390e80062caa4ef9a5ac9beeafe60602a23a8a823579f7c"} Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.817836 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6ft79"] Sep 30 17:18:06 crc kubenswrapper[4688]: I0930 17:18:06.901965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:06.902774 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.402751528 +0000 UTC m=+144.698189086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.923089 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-45pf9"] Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.928424 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-brznm"] Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.933165 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9nd28" event={"ID":"dc3db99d-3df2-4b2c-8303-8366c0165cd6","Type":"ContainerStarted","Data":"dfb1af262f7ddc835ea3794b9079b45d8e1302f477453dca50f13fec268218f1"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.939545 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" event={"ID":"fc255fcb-01db-4aed-b6d4-89f4c5555dd6","Type":"ContainerStarted","Data":"7b5e30d743c7d4afa4b990259206b899d67f2989469cc9f7de983ee13de9a541"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.943628 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" event={"ID":"7adef629-f94a-4968-bd8a-89b01e4f9859","Type":"ContainerStarted","Data":"de654bbe878d0c3be9561edade4c1bd5f385be72d073b4332b0a011d54c569d1"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.946426 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" event={"ID":"6f650a2d-90e1-465d-bc6b-1451aa0099ac","Type":"ContainerStarted","Data":"5fe130283a3562c75487a95b4126497fd9548073de8ac7d90b568a5c03d87c9a"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.963873 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lwmxt"] Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.984879 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bhb8d" event={"ID":"22bdca4c-6929-427a-93bd-6b39d5b1376d","Type":"ContainerStarted","Data":"e25f2998c60ba0b754f20603c1bff45777f98f7aa763d8ed06126b843d19061f"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.984933 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bhb8d" event={"ID":"22bdca4c-6929-427a-93bd-6b39d5b1376d","Type":"ContainerStarted","Data":"dcbbac67c57dfc9fc48dd0ed5637320a2067d48c17cdeb7f55690eabe48afd62"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.988773 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" event={"ID":"52fbf822-f7a7-4048-aef9-493aa1989f48","Type":"ContainerStarted","Data":"bfdce0a9ed7f3a91ce16371f6516195a23e5edfbeb6d0d8d8db0c43d05fab9d9"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:06.995827 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" event={"ID":"541f325d-4d87-4e4f-93c6-11a1d8406c3e","Type":"ContainerStarted","Data":"a9dd2c4f99a250591beaa66f1bb76b8077a75c5c952db802fb7e3e5165c7bd97"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.004427 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.005649 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.505628099 +0000 UTC m=+144.801065657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.030963 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" event={"ID":"213ea898-4f81-49ad-afaa-000178b8d1f4","Type":"ContainerStarted","Data":"6b9245a9355fecba43713660e09ea5684b48864b329f0358ecb861ecf9fba731"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.046621 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.055983 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.056046 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.062403 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf" event={"ID":"e5286b99-2b69-4447-9137-a65e72ed5d3a","Type":"ContainerStarted","Data":"bd09c46e4954e8afa6f95f9815d5bef8636abfdc767252dcde6ad2c5ed6dbe5c"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.086387 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wl7nh" event={"ID":"19515a5a-5268-41e3-8288-0e41a71c9dc4","Type":"ContainerStarted","Data":"1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.086442 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wl7nh" 
event={"ID":"19515a5a-5268-41e3-8288-0e41a71c9dc4","Type":"ContainerStarted","Data":"a558776cc38d7f2d0c10a9c75995c382c093338ef73c1895e8e65ffd30557300"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.116944 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" event={"ID":"a171a3b9-c317-4aa7-84c8-f201dbaa653d","Type":"ContainerStarted","Data":"56f731009baccebcabc8e4870c7a8475a1b0cad95efe174a530e0f2a700bc0f5"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.133325 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.148729 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.648706092 +0000 UTC m=+144.944143650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.219361 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dcxhn" podStartSLOduration=124.219344939 podStartE2EDuration="2m4.219344939s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.216478084 +0000 UTC m=+144.511915662" watchObservedRunningTime="2025-09-30 17:18:07.219344939 +0000 UTC m=+144.514782497" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.224549 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" event={"ID":"5b3986b0-0f8b-4404-aefd-f8a0510e1150","Type":"ContainerStarted","Data":"c918f2d11794afa1ae03fd20ff9f6a8478ec3d4100be01f719b27e20ad41334c"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.225365 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.237247 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.237698 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t6z28" 
event={"ID":"24dbec1e-8bd6-471c-bda5-ac5df4a771a1","Type":"ContainerStarted","Data":"289a59f5f11f1ba696979f2286a7a155986ee872b2db4cd99c1bf16757a21835"} Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.237902 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.737876694 +0000 UTC m=+145.033314252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.237996 4688 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-68dks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.238042 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" podUID="5b3986b0-0f8b-4404-aefd-f8a0510e1150" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.238557 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t6z28" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.241764 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-t6z28 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.241813 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t6z28" podUID="24dbec1e-8bd6-471c-bda5-ac5df4a771a1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.241923 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" event={"ID":"bf1537ab-4a36-4cae-aa5c-5035d1705936","Type":"ContainerStarted","Data":"f3adbc73e1f98c6275be2e2d5834c119723e20f2d70a2bdd2dfd18a8b06b118a"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.246833 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" event={"ID":"71af9722-bf72-4336-9e21-4fe59401c4c8","Type":"ContainerStarted","Data":"978ac4fd7b5f80be22bb3592567266a88958123b229678fb8e73aac528f413bf"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.254474 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" 
event={"ID":"25f3ac70-1e22-4b7c-ba7d-2e952da0c248","Type":"ContainerStarted","Data":"bacaef5fe708e54b4624039fd07085e554aee2dc6c28a4a1b7a2ded743f4a924"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.254529 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" event={"ID":"25f3ac70-1e22-4b7c-ba7d-2e952da0c248","Type":"ContainerStarted","Data":"0e70060d199cdb7c28e851c4802b38453a4b910c407adca1ffa8019c9da6e4b1"} Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.339477 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.340376 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.840352154 +0000 UTC m=+145.135789712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.352114 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cfd84" podStartSLOduration=124.352090071 podStartE2EDuration="2m4.352090071s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.34934391 +0000 UTC m=+144.644781468" watchObservedRunningTime="2025-09-30 17:18:07.352090071 +0000 UTC m=+144.647527629" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.441025 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.443756 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:07.943732098 +0000 UTC m=+145.239169656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.471382 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dbtg8" podStartSLOduration=124.471357141 podStartE2EDuration="2m4.471357141s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.468752953 +0000 UTC m=+144.764190511" watchObservedRunningTime="2025-09-30 17:18:07.471357141 +0000 UTC m=+144.766794699" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.495167 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-bhb8d" podStartSLOduration=124.495137233 podStartE2EDuration="2m4.495137233s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.42048433 +0000 UTC m=+144.715921898" watchObservedRunningTime="2025-09-30 17:18:07.495137233 +0000 UTC m=+144.790574791" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.530805 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" podStartSLOduration=124.530784595 podStartE2EDuration="2m4.530784595s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.530220381 +0000 UTC m=+144.825657929" watchObservedRunningTime="2025-09-30 17:18:07.530784595 +0000 UTC m=+144.826222153" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.546552 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.547217 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.047202685 +0000 UTC m=+145.342640243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.604199 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.613552 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wl7nh" podStartSLOduration=124.61352238 podStartE2EDuration="2m4.61352238s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.604832312 +0000 UTC m=+144.900269880" watchObservedRunningTime="2025-09-30 17:18:07.61352238 +0000 UTC m=+144.908959938" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.638072 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vspn6" podStartSLOduration=124.6380192 podStartE2EDuration="2m4.6380192s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.633710768 +0000 UTC m=+144.929148336" watchObservedRunningTime="2025-09-30 17:18:07.6380192 +0000 UTC m=+144.933456758" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.647273 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.648383 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.148360231 +0000 UTC m=+145.443797789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.664368 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fhqnk" podStartSLOduration=124.664344869 podStartE2EDuration="2m4.664344869s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.663757504 +0000 UTC m=+144.959195062" watchObservedRunningTime="2025-09-30 17:18:07.664344869 +0000 UTC m=+144.959782427" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.688203 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rz9sh" podStartSLOduration=124.688180922 podStartE2EDuration="2m4.688180922s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.685185314 +0000 UTC m=+144.980622882" watchObservedRunningTime="2025-09-30 17:18:07.688180922 +0000 UTC m=+144.983618480" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.753808 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.754402 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.254372684 +0000 UTC m=+145.549810242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.778548 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9nd28" podStartSLOduration=6.778524375 podStartE2EDuration="6.778524375s" podCreationTimestamp="2025-09-30 17:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.777315424 +0000 UTC m=+145.072752982" watchObservedRunningTime="2025-09-30 17:18:07.778524375 +0000 UTC m=+145.073961933" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.823756 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6jdt9" podStartSLOduration=124.823736798 podStartE2EDuration="2m4.823736798s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.823103291 +0000 UTC m=+145.118540849" watchObservedRunningTime="2025-09-30 17:18:07.823736798 +0000 UTC m=+145.119174356" Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.854910 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.855407 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.355388726 +0000 UTC m=+145.650826284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.956112 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:07 crc kubenswrapper[4688]: E0930 17:18:07.956489 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.4564723 +0000 UTC m=+145.751909848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:07 crc kubenswrapper[4688]: I0930 17:18:07.993644 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-t6z28" podStartSLOduration=124.993622172 podStartE2EDuration="2m4.993622172s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:07.992937044 +0000 UTC m=+145.288374612" watchObservedRunningTime="2025-09-30 17:18:07.993622172 +0000 UTC m=+145.289059730" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.057275 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:08 crc kubenswrapper[4688]: E0930 17:18:08.057767 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.557744549 +0000 UTC m=+145.853182107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.058528 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 30 17:18:08 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Sep 30 17:18:08 crc kubenswrapper[4688]: [+]process-running ok
Sep 30 17:18:08 crc kubenswrapper[4688]: healthz check failed
Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.058615 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.140983 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" podStartSLOduration=125.140960105 podStartE2EDuration="2m5.140960105s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:08.129086205 +0000 UTC m=+145.424523763" watchObservedRunningTime="2025-09-30 17:18:08.140960105 +0000 UTC m=+145.436397663"
Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.162429 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:08 crc kubenswrapper[4688]: E0930 17:18:08.162926 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.662904569 +0000 UTC m=+145.958342117 (durationBeforeRetry 500ms).
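The router probe body captured above ("[-]backend-http failed: reason withheld", "[+]process-running ok", "healthz check failed", with HTTP 500) is the conventional aggregated-healthz format: one status line per named check, and a 500 while any check is red. That is what flips the probe failure from the earlier "connection refused" to "statuscode: 500" once the router process is listening but has not yet synced routes. A hand-rolled handler in that style; the check names are taken from the log, the code is illustrative and not the router's:

```go
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	ok   func() bool
}

// healthz writes one "[+]name ok" / "[-]name failed: reason withheld" line per
// check and answers 500 with a trailing "healthz check failed" while any check
// is red, mirroring the probe body in the log.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if c.ok() {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			}
		}
		if failed {
			body += "healthz check failed\n"
			w.WriteHeader(http.StatusInternalServerError) // the "statuscode: 500" the prober reports
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	// backend-http and has-synced stay red until the router has synced routes;
	// process-running is already green, as in the captured probe output.
	http.Handle("/healthz/ready", healthz([]check{
		{"backend-http", func() bool { return false }},
		{"has-synced", func() bool { return false }},
		{"process-running", func() bool { return true }},
	}))
	_ = http.ListenAndServe("localhost:1936", nil)
}
```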
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.219611 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" podStartSLOduration=125.219585142 podStartE2EDuration="2m5.219585142s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:08.219333305 +0000 UTC m=+145.514770883" watchObservedRunningTime="2025-09-30 17:18:08.219585142 +0000 UTC m=+145.515022700" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.264610 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:08 crc kubenswrapper[4688]: E0930 17:18:08.265230 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.765208095 +0000 UTC m=+146.060645653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.290852 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" event={"ID":"e76da010-cdc9-41ad-82a5-19ecb9a7e278","Type":"ContainerStarted","Data":"15f51b2d357bfd098dd15ee751b17052bad8625990c16193367a1c05a32635d8"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.310501 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9nd28" event={"ID":"dc3db99d-3df2-4b2c-8303-8366c0165cd6","Type":"ContainerStarted","Data":"513ccdb335a368c0c7ab4edb94a3fabeb0cde7559ee5af23bd889a004abcf804"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.327036 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" event={"ID":"bf1537ab-4a36-4cae-aa5c-5035d1705936","Type":"ContainerStarted","Data":"3a18192b3b71751799f0e9a888503a11e9f281465a62e0a91e5dfaa536179af6"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.358243 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf" event={"ID":"e5286b99-2b69-4447-9137-a65e72ed5d3a","Type":"ContainerStarted","Data":"30a1efd499e0798641b17e51de7d34c15bc7b6d33ad1507415e2c8fcb47bde88"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.370592 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:08 crc kubenswrapper[4688]: E0930 17:18:08.389435 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.889409194 +0000 UTC m=+146.184846752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.400037 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" event={"ID":"5b3986b0-0f8b-4404-aefd-f8a0510e1150","Type":"ContainerStarted","Data":"07d4142996e9e3e44f02673adbcf46b418975cbfe06b8c42ac9748e7ea718d17"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.400897 4688 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-68dks container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.400965 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" podUID="5b3986b0-0f8b-4404-aefd-f8a0510e1150" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.441374 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" event={"ID":"2d4b9d3a-9096-4b80-91fa-778ea8d5599a","Type":"ContainerStarted","Data":"683d224b3e08462f5f8d9156836532658d2bb0f7a0efc9a8a745edf52cd9dbd2"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.480771 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs" event={"ID":"1c3ebb64-9996-4f07-a582-bebc5edeae3e","Type":"ContainerStarted","Data":"7293d5f7838bce2a885f9ca0c01d1f1bffb69d53eb1dc732d8eab76b0fd7c408"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.482260 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:08 crc kubenswrapper[4688]: E0930 17:18:08.483834 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:08.983803623 +0000 UTC m=+146.279241181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.516331 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" event={"ID":"a171a3b9-c317-4aa7-84c8-f201dbaa653d","Type":"ContainerStarted","Data":"d2da5fb020cdf094a438204b12d743f9c0766657ff6f7359abb619f324a5fdbe"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.518747 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.533721 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" event={"ID":"53f98e9c-2118-4997-b893-dd6744089ce9","Type":"ContainerStarted","Data":"c1e72462dab4f8e63d6b97b7c6f4797442bd80a47fcecc31fd633a24233f5446"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.533792 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" event={"ID":"53f98e9c-2118-4997-b893-dd6744089ce9","Type":"ContainerStarted","Data":"74d437d5e25aac47be7740ac08dfefde1e63f2c78b323181fffd81e3f92d16f7"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.544553 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-smnqd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.544636 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" podUID="a171a3b9-c317-4aa7-84c8-f201dbaa653d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.549162 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" event={"ID":"d7a53520-b95e-4e7f-b4d0-c60c3a5a35ef","Type":"ContainerStarted","Data":"dbbd6a2201fe6433508dea1afa844e5a03e859cf8be354c183a8393cce0ffc9f"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.556604 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" event={"ID":"645a26b3-241a-4738-8654-689e2dd6b93a","Type":"ContainerStarted","Data":"39da08b3ac5fda3ca24e51344e8f7f070dbf9a5bd9cc02e83224d97d6b734515"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.575493 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" event={"ID":"012d15a1-9240-4170-be9c-5578b1036190","Type":"ContainerStarted","Data":"254c9327c025b925d104174e4f94fc0547f705648da43f3dff39a724da8bf754"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.585800 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lwmxt" event={"ID":"b13f5778-4f7d-4528-baf7-cfbbb712ea77","Type":"ContainerStarted","Data":"a9368a8718e8f55bb817cbf7bb7b2dd563cad4fcac3e64618556860b745ae609"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.586126 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:08 crc kubenswrapper[4688]: E0930 17:18:08.588524 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:09.086519639 +0000 UTC m=+146.381957187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.595769 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" event={"ID":"52fbf822-f7a7-4048-aef9-493aa1989f48","Type":"ContainerStarted","Data":"76c86ae33013e8529c99e9b89ee686c0bcd31d74e834afbc5107e4c2ffed6865"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.639188 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" event={"ID":"fc255fcb-01db-4aed-b6d4-89f4c5555dd6","Type":"ContainerStarted","Data":"34a3d773985ff6ccecdb5deae8f54eb8960b3240f96610c7c74b969b752d29b7"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.639257 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" event={"ID":"fc255fcb-01db-4aed-b6d4-89f4c5555dd6","Type":"ContainerStarted","Data":"fb4822015fcbeb342a62155295fe99f85779be1277da5032e421b7c51484e926"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.640105 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.659549 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" podStartSLOduration=125.659529649 podStartE2EDuration="2m5.659529649s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:08.657409013 +0000 UTC m=+145.952846591" watchObservedRunningTime="2025-09-30 17:18:08.659529649 +0000 UTC m=+145.954967217" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.676374 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" 
event={"ID":"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9","Type":"ContainerStarted","Data":"9962a907d76e3d51190eb08501a5e456a168fe81a4b7100e6fdd404351173d38"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.676430 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" event={"ID":"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9","Type":"ContainerStarted","Data":"a378f44340f6e76565bae97d0843e65c729240eea5ffb8562276f41856493016"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.689099 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.690911 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" event={"ID":"9b3787c4-3ac4-4a95-b946-6fe5e8d91b94","Type":"ContainerStarted","Data":"55911c826503fde13725752c9d8c44816efb337f28ae5bc31a03f097b117cd16"} Sep 30 17:18:08 crc kubenswrapper[4688]: E0930 17:18:08.691637 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:09.191610748 +0000 UTC m=+146.487048306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.718438 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" event={"ID":"fc656260-bc8c-4a9d-b6cb-ddea5e161058","Type":"ContainerStarted","Data":"4db274500e7adb20ed4f95d4cff687ac013f824e719b24363ae2e3f4f7160755"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.737231 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-brznm" event={"ID":"874fcfcf-2899-4b9d-864b-c9a35a52f214","Type":"ContainerStarted","Data":"e48e7a570e6dc630df8883af2e2d4914906a9f71e16c5716a5dd05105d76e428"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.750053 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" podStartSLOduration=125.750024946 podStartE2EDuration="2m5.750024946s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:08.705902412 +0000 UTC m=+146.001339970" watchObservedRunningTime="2025-09-30 17:18:08.750024946 +0000 UTC m=+146.045462504" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.788711 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phl9k" podStartSLOduration=125.788688807 podStartE2EDuration="2m5.788688807s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:08.751264518 +0000 UTC m=+146.046702076" watchObservedRunningTime="2025-09-30 17:18:08.788688807 +0000 UTC m=+146.084126365" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.788837 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" podStartSLOduration=125.788833531 podStartE2EDuration="2m5.788833531s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:08.785427432 +0000 UTC m=+146.080865000" watchObservedRunningTime="2025-09-30 17:18:08.788833531 +0000 UTC m=+146.084271089" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.791748 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:08 crc kubenswrapper[4688]: E0930 17:18:08.792700 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:09.292676121 +0000 UTC m=+146.588113679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.799430 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bl66m" event={"ID":"541f325d-4d87-4e4f-93c6-11a1d8406c3e","Type":"ContainerStarted","Data":"bea20643c8a68f601d15e0efd93c732c3a6af6e41eccf5e0f8ef9786494eae5f"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.805283 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" event={"ID":"71af9722-bf72-4336-9e21-4fe59401c4c8","Type":"ContainerStarted","Data":"c0b20c65fc087d3f1227593d845e6227fa69aee81f70571acfae1c426f6619b2"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.808544 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" event={"ID":"c9556dbe-5bce-417e-84f1-1e7ed221d383","Type":"ContainerStarted","Data":"933cea3fa76edee8795ff7ac5ad2ed39670c4acfb5e0421d7a4d0a64aab36f77"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.819549 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-65pp7" event={"ID":"5ddf43f5-3af1-4de7-b474-37dc49c3ab48","Type":"ContainerStarted","Data":"125827d1c8001b1ae6d47e864cdbd307ef58c46b2105509182a0f8d609cf5b41"} Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.820177 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-t6z28 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.820226 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t6z28" podUID="24dbec1e-8bd6-471c-bda5-ac5df4a771a1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.820456 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.843981 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mjln6" podStartSLOduration=125.843954893 podStartE2EDuration="2m5.843954893s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:08.839306251 +0000 UTC m=+146.134743809" watchObservedRunningTime="2025-09-30 17:18:08.843954893 +0000 UTC m=+146.139392451" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.883509 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" podStartSLOduration=125.883487997 podStartE2EDuration="2m5.883487997s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:08.882300576 +0000 UTC m=+146.177738134" watchObservedRunningTime="2025-09-30 17:18:08.883487997 +0000 UTC m=+146.178925555" Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.900895 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.901047 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ttss4" Sep 30 17:18:08 crc kubenswrapper[4688]: E0930 17:18:08.901349 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:09.401324503 +0000 UTC m=+146.696762061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:08 crc kubenswrapper[4688]: I0930 17:18:08.918548 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-422bt" podStartSLOduration=125.918523973 podStartE2EDuration="2m5.918523973s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:08.916819458 +0000 UTC m=+146.212257016" watchObservedRunningTime="2025-09-30 17:18:08.918523973 +0000 UTC m=+146.213961531" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.003588 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.008293 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:09.508273201 +0000 UTC m=+146.803710759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.057857 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:09 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:09 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:09 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.057939 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.115148 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.116159 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:09.616132482 +0000 UTC m=+146.911570040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.218058 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.218512 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:09.718494359 +0000 UTC m=+147.013931917 (durationBeforeRetry 500ms). 
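The router startup-probe output above ("[-]backend-http failed: reason withheld", "[-]has-synced failed: reason withheld", "[+]process-running ok", "healthz check failed") is the conventional aggregated-healthz format: each named check reports [+] or [-], and the endpoint returns 500 if any check fails, which is exactly the "statuscode: 500" the prober logs. A hand-rolled sketch of that pattern; the check names mirror the log but the logic and port are invented for illustration:

```go
// Aggregated health endpoint in the "[+]check ok / [-]check failed" style
// seen in the router probe output above. Checks and port are illustrative.
package main

import (
	"fmt"
	"log"
	"net/http"
)

var checks = map[string]func() error{
	"backend-http":    func() error { return fmt.Errorf("reason withheld") },
	"has-synced":      func() error { return fmt.Errorf("reason withheld") },
	"process-running": func() error { return nil },
}

func healthz(w http.ResponseWriter, _ *http.Request) {
	body, failed := "", false
	for name, check := range checks { // map order is random; a real mux sorts
		if err := check(); err != nil {
			failed = true
			body += fmt.Sprintf("[-]%s failed: %v\n", name, err)
		} else {
			body += fmt.Sprintf("[+]%s ok\n", name)
		}
	}
	if failed {
		w.WriteHeader(http.StatusInternalServerError) // probe sees statuscode: 500
		body += "healthz check failed\n"
	}
	fmt.Fprint(w, body)
}

func main() {
	http.HandleFunc("/healthz", healthz)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```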
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.319191 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.319790 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:09.819757018 +0000 UTC m=+147.115194566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.421059 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.421497 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:09.921475808 +0000 UTC m=+147.216913366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.523114 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.523349 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.023302671 +0000 UTC m=+147.318740229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.523477 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.523994 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.023984259 +0000 UTC m=+147.319421817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.624822 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.625053 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.125013001 +0000 UTC m=+147.420450559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.625508 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.625909 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.125892934 +0000 UTC m=+147.421330482 (durationBeforeRetry 500ms). 
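Each failed volume operation is parked by kubelet's nestedpendingoperations bookkeeping: no retry of that operation is permitted until now + durationBeforeRetry (a flat 500ms throughout this stretch, visible in the "No retries permitted until" timestamps advancing in half-second steps). The same retry-with-backoff pattern is available as a library helper; a sketch using apimachinery's wait package, where mountAttempt is a hypothetical stand-in for the failing MountDevice call (the sketch uses a growing interval to show the knob, whereas the backoff in this log stays constant):

```go
// Retry with backoff, in the spirit of the kubelet volume manager's
// per-operation backoff. mountAttempt is a hypothetical stand-in.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func mountAttempt() error {
	return errors.New("driver name kubevirt.io.hostpath-provisioner not found " +
		"in the list of registered CSI drivers")
}

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // matches durationBeforeRetry 500ms
		Factor:   2.0,                    // growth factor; 1.0 would mirror the flat case
		Steps:    5,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := mountAttempt(); err != nil {
			fmt.Println("retrying after backoff:", err)
			return false, nil // not done; try again after the next interval
		}
		return true, nil
	})
	fmt.Println("final result:", err) // wait.ErrWaitTimeout once steps are exhausted
}
```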
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.726884 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.727113 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.227074 +0000 UTC m=+147.522511558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.727504 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.727913 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.227896922 +0000 UTC m=+147.523334480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.828944 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.829217 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.329174311 +0000 UTC m=+147.624611869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.829436 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.829914 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.329883129 +0000 UTC m=+147.625320847 (durationBeforeRetry 500ms). 
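Interleaved with the volume churn, the "SyncLoop (PLEG)" burst below shows the Pod Lifecycle Event Generator relisting the container runtime and feeding ContainerStarted events into kubelet's sync loop, which in turn drives the per-pod status updates and probe scheduling seen throughout this window. A toy analogue of that event plumbing; the types and truncated IDs are invented for illustration, and kubelet's real PLEG is considerably more involved:

```go
// Toy pod-lifecycle event pipeline: a generator goroutine emits
// ContainerStarted events and a sync loop consumes them. All names and
// IDs here are illustrative.
package main

import "fmt"

type PodLifecycleEvent struct {
	PodID string
	Type  string // e.g. "ContainerStarted"
	Data  string // container ID
}

func main() {
	events := make(chan PodLifecycleEvent, 8)

	go func() { // the "PLEG": relist runtime state, emit deltas
		events <- PodLifecycleEvent{"5b3986b0", "ContainerStarted", "07d41429"}
		events <- PodLifecycleEvent{"2d4b9d3a", "ContainerStarted", "683d224b"}
		close(events)
	}()

	for ev := range events { // the "sync loop": react per pod
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n",
			ev.PodID, ev.Type, ev.Data)
	}
}
```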
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.842317 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" event={"ID":"2d4b9d3a-9096-4b80-91fa-778ea8d5599a","Type":"ContainerStarted","Data":"42e167e3111e5114df03bb86c7b8eaef3bf5f4d1ac12fbc5dc62f6805ac75420"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.844771 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" event={"ID":"fc656260-bc8c-4a9d-b6cb-ddea5e161058","Type":"ContainerStarted","Data":"9a03e9f4c877033aab2e929d6e2279acaaafad249e23e94fbd824c62e2c70222"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.848632 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs" event={"ID":"1c3ebb64-9996-4f07-a582-bebc5edeae3e","Type":"ContainerStarted","Data":"e4eccd1099e3c67e93356461f9946070592e2b72ebd07abf46030e7c9d42c2d3"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.850998 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" event={"ID":"53f98e9c-2118-4997-b893-dd6744089ce9","Type":"ContainerStarted","Data":"dc7e79d7b4bf70e7e6fe92abbca5f280c49ff3949b2b93450924b245db937366"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.852993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" event={"ID":"012d15a1-9240-4170-be9c-5578b1036190","Type":"ContainerStarted","Data":"6185e07cfd94015230f33961cf79acc48a99c4f72131d0473dbd4740edd05326"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.853461 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.854777 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf" event={"ID":"e5286b99-2b69-4447-9137-a65e72ed5d3a","Type":"ContainerStarted","Data":"c5251b4724962d1291a7f89ffd0daa00422a78ade79b5516b27e1e2bd4c2b216"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.855660 4688 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5qlst container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.855699 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" podUID="012d15a1-9240-4170-be9c-5578b1036190" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.856558 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lwmxt" event={"ID":"b13f5778-4f7d-4528-baf7-cfbbb712ea77","Type":"ContainerStarted","Data":"4eb44c7f4bb5833348cb4e866d5879d9edcd95972ed64110295f8aecd8adb39e"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.858825 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" event={"ID":"e76da010-cdc9-41ad-82a5-19ecb9a7e278","Type":"ContainerStarted","Data":"39d5f5ac0d09a06f3926cab4cc2bacf17e13cddb5c8725280e73f56537e7f977"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.858853 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" event={"ID":"e76da010-cdc9-41ad-82a5-19ecb9a7e278","Type":"ContainerStarted","Data":"2cabf346750b56b54681d82470e0c3377f004f31cb911e107a3116f476ccd10a"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.861354 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" event={"ID":"2ba9d3e1-100f-4a41-8b6d-9e1b366ebda9","Type":"ContainerStarted","Data":"f3c98e396219dcb89b9e7420db797ae491f10968a3372492bacbb6efac399319"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.863528 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-65pp7" event={"ID":"5ddf43f5-3af1-4de7-b474-37dc49c3ab48","Type":"ContainerStarted","Data":"07d42bed022adc0261ec0cde9a55828bfdc501be32667f3535493df136c9f6d5"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.863559 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-65pp7" event={"ID":"5ddf43f5-3af1-4de7-b474-37dc49c3ab48","Type":"ContainerStarted","Data":"e302655843debeed037fe54262d57336e003d1d2f144152e91d9d47abc0396d1"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.863925 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-65pp7" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.865984 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" event={"ID":"645a26b3-241a-4738-8654-689e2dd6b93a","Type":"ContainerStarted","Data":"e4e6b530f4a7784847d41653fc529540a0ce8f2ce553d371e55143428b8b32a1"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.869410 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" event={"ID":"71af9722-bf72-4336-9e21-4fe59401c4c8","Type":"ContainerStarted","Data":"b64e265b777c6d337bc07e20c8d28eac78c6bcb84f04ca0d18dd8f05cd928b1e"} Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.870081 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-t6z28 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.870197 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t6z28" podUID="24dbec1e-8bd6-471c-bda5-ac5df4a771a1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.870240 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-smnqd 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.870291 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" podUID="a171a3b9-c317-4aa7-84c8-f201dbaa653d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.877404 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" podStartSLOduration=126.877380852 podStartE2EDuration="2m6.877380852s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:09.875164164 +0000 UTC m=+147.170601722" watchObservedRunningTime="2025-09-30 17:18:09.877380852 +0000 UTC m=+147.172818410" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.894172 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fbxbf" podStartSLOduration=126.89415249 podStartE2EDuration="2m6.89415249s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:09.892184809 +0000 UTC m=+147.187622367" watchObservedRunningTime="2025-09-30 17:18:09.89415249 +0000 UTC m=+147.189590048" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.915201 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" podStartSLOduration=126.91518439 podStartE2EDuration="2m6.91518439s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:09.912021378 +0000 UTC m=+147.207458946" watchObservedRunningTime="2025-09-30 17:18:09.91518439 +0000 UTC m=+147.210621948" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.931337 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.931609 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.431554899 +0000 UTC m=+147.726992457 (durationBeforeRetry 500ms). 
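The recurring "connect: connection refused" readiness failures in this window (packageserver, marketplace-operator, downloads, catalog-operator) mean the kubelet prober dialed the pod IP before the server inside had started listening; they clear on a later probe cycle once the endpoint is up, as the nearby status="ready" SyncLoop (probe) entries show. Roughly what the HTTP prober does, minus header handling and body truncation; the target URL is copied from the log entries above:

```go
// Minimal HTTP readiness-style check: any transport error (such as
// "connect: connection refused") or non-2xx/3xx status counts as failure.
// The real kubelet prober also sets headers and caps the response body.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 10.217.0.36:8080: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://10.217.0.36:8080/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	} else {
		fmt.Println("ready")
	}
}
```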
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.932472 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:09 crc kubenswrapper[4688]: E0930 17:18:09.934609 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.434563117 +0000 UTC m=+147.730000685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.943249 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9xbs" podStartSLOduration=126.943216754 podStartE2EDuration="2m6.943216754s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:09.93962798 +0000 UTC m=+147.235065538" watchObservedRunningTime="2025-09-30 17:18:09.943216754 +0000 UTC m=+147.238654312" Sep 30 17:18:09 crc kubenswrapper[4688]: I0930 17:18:09.976270 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-45pf9" podStartSLOduration=126.976242467 podStartE2EDuration="2m6.976242467s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:09.973526376 +0000 UTC m=+147.268963934" watchObservedRunningTime="2025-09-30 17:18:09.976242467 +0000 UTC m=+147.271680025" Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.000456 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-65pp7" podStartSLOduration=9.00043154 podStartE2EDuration="9.00043154s" podCreationTimestamp="2025-09-30 17:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:09.99700695 +0000 UTC m=+147.292444508" watchObservedRunningTime="2025-09-30 17:18:10.00043154 +0000 UTC m=+147.295869108" Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.030046 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-68dks" Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.032976 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-6qrtn" podStartSLOduration=127.032949151 podStartE2EDuration="2m7.032949151s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:10.028655688 +0000 UTC m=+147.324093246" watchObservedRunningTime="2025-09-30 17:18:10.032949151 +0000 UTC m=+147.328386699" Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.034015 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:10 crc kubenswrapper[4688]: E0930 17:18:10.034186 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.534155032 +0000 UTC m=+147.829592590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.034347 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:10 crc kubenswrapper[4688]: E0930 17:18:10.034756 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.534748388 +0000 UTC m=+147.830185946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.051215 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:10 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:10 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:10 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.051295 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.067311 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nz5qz" podStartSLOduration=127.067284569 podStartE2EDuration="2m7.067284569s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:10.061047285 +0000 UTC m=+147.356484843" watchObservedRunningTime="2025-09-30 17:18:10.067284569 +0000 UTC m=+147.362722127" Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.088828 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6ft79" podStartSLOduration=127.088808252 podStartE2EDuration="2m7.088808252s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:10.086345037 +0000 UTC m=+147.381782595" watchObservedRunningTime="2025-09-30 17:18:10.088808252 +0000 UTC m=+147.384245810" Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.131400 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lwmxt" podStartSLOduration=8.131373495 podStartE2EDuration="8.131373495s" podCreationTimestamp="2025-09-30 17:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:10.13005364 +0000 UTC m=+147.425491198" watchObservedRunningTime="2025-09-30 17:18:10.131373495 +0000 UTC m=+147.426811053" Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.138255 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:10 crc kubenswrapper[4688]: E0930 17:18:10.138463 4688 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.638425969 +0000 UTC m=+147.933863517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.138521 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs"
Sep 30 17:18:10 crc kubenswrapper[4688]: E0930 17:18:10.138920 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:10.638912392 +0000 UTC m=+147.934349950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.149402 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pgbkf" podStartSLOduration=127.14937947600001 podStartE2EDuration="2m7.149379476s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:10.148036971 +0000 UTC m=+147.443474529" watchObservedRunningTime="2025-09-30 17:18:10.149379476 +0000 UTC m=+147.444817024"
Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.179842 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6v2mg" podStartSLOduration=127.179821072 podStartE2EDuration="2m7.179821072s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:10.178753424 +0000 UTC m=+147.474190982" watchObservedRunningTime="2025-09-30 17:18:10.179821072 +0000 UTC m=+147.475258630"
[... the identical UnmountVolume.TearDown (pod "8f668bae-612b-4b75-9490-919e737c6a3b") / MountVolume.MountDevice (pod "image-registry-697d97f7c8-85wjs") failure pair repeats on every ~100 ms reconciler pass from 17:18:10.241 through 17:18:12.992, each occurrence deferred a further 500 ms with the same "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" error; only the distinct interleaved events are reproduced below ...]
Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.521583 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq"
Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.877795 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-brznm" event={"ID":"874fcfcf-2899-4b9d-864b-c9a35a52f214","Type":"ContainerStarted","Data":"91b7897453d03ce7dd1909e286f50572c0a1e6b24b6d3680a8769a6cfcc1b6bc"}
Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.878733 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-smnqd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.878801 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" podUID="a171a3b9-c317-4aa7-84c8-f201dbaa653d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.878953 4688 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5qlst container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Sep 30 17:18:10 crc kubenswrapper[4688]: I0930 17:18:10.879019 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst" podUID="012d15a1-9240-4170-be9c-5578b1036190" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.051262 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 30 17:18:11 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Sep 30 17:18:11 crc kubenswrapper[4688]: [+]process-running ok
Sep 30 17:18:11 crc kubenswrapper[4688]: healthz check failed
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.051331 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.259428 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.259473 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.259612 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.259637 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.264664 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.267581 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.267616 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.268292 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.359878 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.389206 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.543260 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.883863 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d264f5d79b49ff689059f70243c700927be437e5b9a34b99ee59ef907b392d20"}
Sep 30 17:18:11 crc kubenswrapper[4688]: I0930 17:18:11.922968 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5qlst"
Sep 30 17:18:11 crc kubenswrapper[4688]: W0930 17:18:11.971630 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-ed7fb168911dcf760d49d1d9d794ee00cf1d03b9da222411ec4b301e291d82d3 WatchSource:0}: Error finding container ed7fb168911dcf760d49d1d9d794ee00cf1d03b9da222411ec4b301e291d82d3: Status 404 returned error can't find the container with id ed7fb168911dcf760d49d1d9d794ee00cf1d03b9da222411ec4b301e291d82d3
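The retry storm resolves itself once the hostpath-provisioner plugin (the csi-hostpathplugin pod whose ContainerStarted event appears above) registers with kubelet: the driver name then appears in the registered-driver list and the next retry of the embargoed operations can proceed. A sketch of that lookup behavior, again with invented names (csiRegistry, client); kubelet tracks this state internally and the log only shows the lookup failing:

```go
package main

import (
	"fmt"
	"sync"
)

// csiRegistry is a hypothetical stand-in for kubelet's registered-driver
// list: lookups of an unregistered name fail with the error seen in the log.
type csiRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *csiRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = struct{}{}
}

func (r *csiRegistry) client(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	reg := &csiRegistry{drivers: map[string]struct{}{}}
	const driver = "kubevirt.io.hostpath-provisioner"

	// Before the plugin pod registers: every mount/unmount attempt fails.
	fmt.Println(reg.client(driver)) // driver name ... not found ...

	// The csi-hostpathplugin container starts and registers the driver.
	reg.register(driver)
	fmt.Println(reg.client(driver)) // <nil>: subsequent operations proceed
}
```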
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.059745 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:12 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:12 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:12 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.059816 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.077182 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.078051 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:12.578027981 +0000 UTC m=+149.873465539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.179326 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.179857 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:12.679831504 +0000 UTC m=+149.975269062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.281221 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.281776 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:12.7817466 +0000 UTC m=+150.077184158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.383180 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.383754 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:12.883728767 +0000 UTC m=+150.179166495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.484031 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.484290 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:12.984254886 +0000 UTC m=+150.279692444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.484417 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.484846 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:12.984838611 +0000 UTC m=+150.280276169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.585212 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.585514 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.085453513 +0000 UTC m=+150.380891071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.585598 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.586022 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.086011278 +0000 UTC m=+150.381449026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.686938 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.687178 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.187146693 +0000 UTC m=+150.482584251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.687760 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.688431 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.188412906 +0000 UTC m=+150.483850464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.788987 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.789183 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.289153791 +0000 UTC m=+150.584591349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.789386 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.789799 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.289790708 +0000 UTC m=+150.585228266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.890224 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.890391 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.390367738 +0000 UTC m=+150.685805296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.890613 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.890989 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.390981634 +0000 UTC m=+150.686419192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.892861 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f386608c7b9efcbc97c0ebbf0551d60ee12cd4b7fed23a17bd19f404ed9c1c85"} Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.892904 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ed7fb168911dcf760d49d1d9d794ee00cf1d03b9da222411ec4b301e291d82d3"} Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.896634 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"52740a2f8a3748c1e1d781f7786aa2d15d6596a721be9945d16d8dbe7ab428fa"} Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.896704 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"71ff194259c9415d20ab7fc7fcd52c1010a0010b10c340e25c58120b37fcdac1"} Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.897291 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.901851 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9ee2e1bfd30ded211281c8a94de8fb4c59a3c155de1b6239a0fa32ed3c6230be"} Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.992375 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.992679 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.492632293 +0000 UTC m=+150.788069851 (durationBeforeRetry 500ms). 
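
Each failed attempt arms a no-retry window via nestedpendingoperations: "No retries permitted until <t> (durationBeforeRetry 500ms)", after which the reconciler loop tries again. A hedged sketch of that per-operation gating follows; the names are illustrative, the cap and the exponential growth on repeated failures are assumptions of the sketch (this log only ever shows the initial 500ms window):

```go
// Per-volume-operation backoff gate, in the spirit of the kubelet's
// nestedpendingoperations messages above. Not the actual implementation.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // the log's durationBeforeRetry
	maxDelay     = 2 * time.Minute        // assumed cap for this sketch
)

type backoff struct {
	lastError time.Time
	delay     time.Duration
}

// mayRetry reports whether the no-retry window for this operation has passed.
func (b *backoff) mayRetry(now time.Time) bool {
	return now.After(b.lastError.Add(b.delay))
}

// recordFailure arms the next window, growing the delay on repeated failures
// (growth is an assumption here; the log shows a steady 500ms).
func (b *backoff) recordFailure(now time.Time) {
	switch {
	case b.delay == 0:
		b.delay = initialDelay
	case b.delay < maxDelay:
		b.delay *= 2
	}
	b.lastError = now
	fmt.Printf("no retries permitted until %s (durationBeforeRetry %s)\n",
		now.Add(b.delay).Format("15:04:05.000"), b.delay)
}

func main() {
	var b backoff
	now := time.Date(2025, 9, 30, 17, 18, 12, 687_000_000, time.UTC)
	for i := 0; i < 3; i++ {
		b.recordFailure(now)   // MountDevice failed again
		now = now.Add(b.delay) // next reconcile attempt once the window reopens
	}
}
```
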
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:12 crc kubenswrapper[4688]: I0930 17:18:12.992876 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:12 crc kubenswrapper[4688]: E0930 17:18:12.993528 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.493484185 +0000 UTC m=+150.788921923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.050480 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:13 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:13 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:13 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.050579 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.094338 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.094566 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.594520688 +0000 UTC m=+150.889958246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.095157 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.095621 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.595605376 +0000 UTC m=+150.891042934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.200176 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.200420 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.700362965 +0000 UTC m=+150.995800523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.200661 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.201088 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.701073474 +0000 UTC m=+150.996511032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.265289 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.266801 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.269133 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.269152 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.275629 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.301982 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.302307 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81652315-c167-4821-a265-3b415a0611e3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81652315-c167-4821-a265-3b415a0611e3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.302361 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81652315-c167-4821-a265-3b415a0611e3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81652315-c167-4821-a265-3b415a0611e3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.302525 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.802503627 +0000 UTC m=+151.097941185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.331889 4688 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.363961 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.364033 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.375191 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.404434 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.404511 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81652315-c167-4821-a265-3b415a0611e3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81652315-c167-4821-a265-3b415a0611e3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.404772 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81652315-c167-4821-a265-3b415a0611e3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"81652315-c167-4821-a265-3b415a0611e3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.404994 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:13.904971457 +0000 UTC m=+151.200409015 (durationBeforeRetry 500ms). 
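
At 17:18:13.331 the picture starts to change: the plugin watcher notices the registration socket /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock and adds it to its desired state cache. A sketch, assuming the fsnotify library, of how a directory watcher can surface that event; this is an illustration of the mechanism, not the kubelet's plugin_watcher code:

```go
// Watch the kubelet plugin registry directory for new *-reg.sock files, the
// event that feeds the "desired state cache" mentioned in the log above.
package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	const pluginDir = "/var/lib/kubelet/plugins_registry"
	if err := w.Add(pluginDir); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev := <-w.Events:
			// A driver registrar creates its registration socket in this
			// directory; the create event triggers the registration flow.
			if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, "-reg.sock") {
				log.Printf("adding socket path to desired state cache: %s", ev.Name)
			}
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```
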
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.405431 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81652315-c167-4821-a265-3b415a0611e3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81652315-c167-4821-a265-3b415a0611e3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.409807 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.409863 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.437675 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81652315-c167-4821-a265-3b415a0611e3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"81652315-c167-4821-a265-3b415a0611e3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.506905 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.507167 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:14.007118709 +0000 UTC m=+151.302556267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.507540 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.509058 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:14.009038339 +0000 UTC m=+151.304475897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.564278 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fh8m7"] Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.565451 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.568737 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.581176 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fh8m7"] Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.585883 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.611226 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.611620 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-catalog-content\") pod \"community-operators-fh8m7\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.611675 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpsq\" (UniqueName: \"kubernetes.io/projected/a8846911-0e30-4c42-9703-08f18f377ee4-kube-api-access-tcpsq\") pod \"community-operators-fh8m7\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.611725 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-utilities\") pod \"community-operators-fh8m7\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.611886 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 17:18:14.111859888 +0000 UTC m=+151.407297446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.662644 4688 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-09-30T17:18:13.331920696Z","Handler":null,"Name":""} Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.716277 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.716806 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-catalog-content\") pod \"community-operators-fh8m7\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.716854 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcpsq\" (UniqueName: \"kubernetes.io/projected/a8846911-0e30-4c42-9703-08f18f377ee4-kube-api-access-tcpsq\") pod \"community-operators-fh8m7\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.716894 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-utilities\") pod \"community-operators-fh8m7\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.720511 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-utilities\") pod \"community-operators-fh8m7\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.722081 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-catalog-content\") pod \"community-operators-fh8m7\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: E0930 17:18:13.722437 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 17:18:14.22241894 +0000 UTC m=+151.517856498 (durationBeforeRetry 500ms). 
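
"OperationExecutor.RegisterPlugin started" at 17:18:13.662 then drives a handshake over that socket: the kubelet calls GetInfo to learn the driver's name, endpoint, and supported versions (the csi_plugin.go lines at 17:18:13.782 below echo exactly those fields) and confirms via NotifyRegistrationStatus. A hedged sketch of the registrar side of that handshake, assuming the k8s.io/kubelet pluginregistration/v1 API; field values are taken from this log, everything else is illustrative:

```go
// Minimal registrar serving the plugin-registration gRPC API on the
// *-reg.sock socket, the role node-driver-registrar plays for CSI drivers.
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registrar struct{}

// GetInfo answers the kubelet's first call with the driver identity the
// csi_plugin.go log lines report ("name: kubevirt.io.hostpath-provisioner
// endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0").
func (registrar) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "kubevirt.io.hostpath-provisioner",
		Endpoint:          "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

// NotifyRegistrationStatus is the kubelet's acknowledgement of registration.
func (registrar) NotifyRegistrationStatus(ctx context.Context, st *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	log.Printf("kubelet reports registered=%v err=%q", st.PluginRegistered, st.Error)
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	registerapi.RegisterRegistrationServer(s, registrar{})
	log.Fatal(s.Serve(l))
}
```
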
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-85wjs" (UID: "14d12c61-000b-4766-93ff-ae30454582a7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.733593 4688 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vxw96 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]log ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]etcd ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/start-apiserver-admission-initializer ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/generic-apiserver-start-informers ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/max-in-flight-filter ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/storage-object-count-tracker-hook ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/image.openshift.io-apiserver-caches ok Sep 30 17:18:13 crc kubenswrapper[4688]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Sep 30 17:18:13 crc kubenswrapper[4688]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/project.openshift.io-projectcache ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/openshift.io-startinformers ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/openshift.io-restmapperupdater ok Sep 30 17:18:13 crc kubenswrapper[4688]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Sep 30 17:18:13 crc kubenswrapper[4688]: livez check failed Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.733670 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" podUID="2d4b9d3a-9096-4b80-91fa-778ea8d5599a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.774432 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcpsq\" (UniqueName: \"kubernetes.io/projected/a8846911-0e30-4c42-9703-08f18f377ee4-kube-api-access-tcpsq\") pod \"community-operators-fh8m7\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.782483 4688 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.782517 4688 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.788490 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ntjl4"] Sep 30 17:18:13 crc kubenswrapper[4688]: 
I0930 17:18:13.789723 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.792778 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.801252 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntjl4"] Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.822057 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.849128 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.865387 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.882315 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.929221 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.929673 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-catalog-content\") pod \"certified-operators-ntjl4\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.929724 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-utilities\") pod \"certified-operators-ntjl4\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.929748 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/f56e08ff-acbe-425f-9570-8ce3a06208b5-kube-api-access-lbdjd\") pod \"certified-operators-ntjl4\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.932469 
4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-brznm" event={"ID":"874fcfcf-2899-4b9d-864b-c9a35a52f214","Type":"ContainerStarted","Data":"b0ecb2e5bc7e894157bc555a9dd14223dbf304c37b788ec7fc68d185088c2d16"} Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.932700 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-brznm" event={"ID":"874fcfcf-2899-4b9d-864b-c9a35a52f214","Type":"ContainerStarted","Data":"4d1d69cc037bcb67c7e2ee67b87a7e5fa6070447352d9923c91a28ef56589ee1"} Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.932732 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-brznm" event={"ID":"874fcfcf-2899-4b9d-864b-c9a35a52f214","Type":"ContainerStarted","Data":"f57c453ff1f7507915e4fc9b7a68abf0f323c12e5e406a1b3603687d39894126"} Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.934533 4688 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.934808 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.937171 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"81652315-c167-4821-a265-3b415a0611e3","Type":"ContainerStarted","Data":"81034525f2240d8854a87ece99f097ec9bd486f9260532e7207ff49ce6ff50f7"} Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.944961 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-f2b5c" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.968030 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-brznm" podStartSLOduration=12.968002324 podStartE2EDuration="12.968002324s" podCreationTimestamp="2025-09-30 17:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:13.963868205 +0000 UTC m=+151.259305783" watchObservedRunningTime="2025-09-30 17:18:13.968002324 +0000 UTC m=+151.263439882" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.971143 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q72fn"] Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.972707 4688 util.go:30] "No sandbox for pod can be found. 
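
With the driver registered, the stuck volume unblocks within one retry window: UnmountVolume.TearDown succeeds at 17:18:13.849, and MountVolume.MountDevice "succeeds" trivially at 17:18:13.934 because the driver does not advertise STAGE_UNSTAGE_VOLUME, so there is no staging step to perform before publishing into the globalmount path. A sketch of that capability probe, assuming the container-storage-interface Go bindings:

```go
// Query NodeGetCapabilities and skip the staging (MountDevice) step when
// STAGE_UNSTAGE_VOLUME is absent, as the attacher logs above describe.
package main

import (
	"context"
	"log"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func nodeSupportsStageUnstage(ctx context.Context, node csi.NodeClient) (bool, error) {
	resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ok, err := nodeSupportsStageUnstage(context.Background(), csi.NewNodeClient(conn))
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		log.Println("STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
	}
}
```
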
Need to start a new one" pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.987481 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-85wjs\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:13 crc kubenswrapper[4688]: I0930 17:18:13.991545 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q72fn"] Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.030812 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-catalog-content\") pod \"certified-operators-ntjl4\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.030923 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-utilities\") pod \"certified-operators-ntjl4\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.030965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/f56e08ff-acbe-425f-9570-8ce3a06208b5-kube-api-access-lbdjd\") pod \"certified-operators-ntjl4\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.031003 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-utilities\") pod \"community-operators-q72fn\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") " pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.031053 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmmcl\" (UniqueName: \"kubernetes.io/projected/43f69a53-da75-418d-a8c6-e0557d204bd3-kube-api-access-bmmcl\") pod \"community-operators-q72fn\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") " pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.031114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-catalog-content\") pod \"community-operators-q72fn\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") " pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.033058 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-catalog-content\") pod \"certified-operators-ntjl4\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:14 crc 
kubenswrapper[4688]: I0930 17:18:14.033298 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-utilities\") pod \"certified-operators-ntjl4\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.057003 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:14 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:14 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:14 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.057070 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.063361 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/f56e08ff-acbe-425f-9570-8ce3a06208b5-kube-api-access-lbdjd\") pod \"certified-operators-ntjl4\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.068977 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.133801 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-utilities\") pod \"community-operators-q72fn\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") " pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.133858 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmmcl\" (UniqueName: \"kubernetes.io/projected/43f69a53-da75-418d-a8c6-e0557d204bd3-kube-api-access-bmmcl\") pod \"community-operators-q72fn\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") " pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.133887 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-catalog-content\") pod \"community-operators-q72fn\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") " pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.134949 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-catalog-content\") pod \"community-operators-q72fn\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") " pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.135195 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-utilities\") pod \"community-operators-q72fn\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") " pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.138838 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.165860 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmmcl\" (UniqueName: \"kubernetes.io/projected/43f69a53-da75-418d-a8c6-e0557d204bd3-kube-api-access-bmmcl\") pod \"community-operators-q72fn\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") " pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.175677 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jgnxv"] Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.176853 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.191105 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgnxv"] Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.235899 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-catalog-content\") pod \"certified-operators-jgnxv\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.236016 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqdpx\" (UniqueName: \"kubernetes.io/projected/5879067e-2c76-404f-a70f-85af13c713b9-kube-api-access-nqdpx\") pod \"certified-operators-jgnxv\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.236037 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-utilities\") pod \"certified-operators-jgnxv\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.269125 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fh8m7"] Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.294073 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.337355 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqdpx\" (UniqueName: \"kubernetes.io/projected/5879067e-2c76-404f-a70f-85af13c713b9-kube-api-access-nqdpx\") pod \"certified-operators-jgnxv\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.337409 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-utilities\") pod \"certified-operators-jgnxv\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.337443 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-catalog-content\") pod \"certified-operators-jgnxv\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.338134 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-catalog-content\") pod \"certified-operators-jgnxv\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.338332 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-utilities\") pod \"certified-operators-jgnxv\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.365030 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.365092 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.367552 4688 patch_prober.go:28] interesting pod/console-f9d7485db-wl7nh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.367606 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wl7nh" podUID="19515a5a-5268-41e3-8288-0e41a71c9dc4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.387503 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-t6z28 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.391025 4688 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-t6z28" podUID="24dbec1e-8bd6-471c-bda5-ac5df4a771a1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.388731 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-t6z28 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.391525 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t6z28" podUID="24dbec1e-8bd6-471c-bda5-ac5df4a771a1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.401895 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85wjs"] Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.403341 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqdpx\" (UniqueName: \"kubernetes.io/projected/5879067e-2c76-404f-a70f-85af13c713b9-kube-api-access-nqdpx\") pod \"certified-operators-jgnxv\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.491473 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntjl4"] Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.538056 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:18:14 crc kubenswrapper[4688]: E0930 17:18:14.711510 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8846911_0e30_4c42_9703_08f18f377ee4.slice/crio-7e5c05f555fe0de5b429941e618e02bf39d6bda85472b89b4b551e8fc33d1d9a.scope\": RecentStats: unable to find data in memory cache]" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.784112 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.835174 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgnxv"] Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.912734 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q72fn"] Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.950871 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgnxv" event={"ID":"5879067e-2c76-404f-a70f-85af13c713b9","Type":"ContainerStarted","Data":"a635657ec9e0b3c1f08dfc81ffbc964428eae7d88d9949878f8503515971c7aa"} Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.961410 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" event={"ID":"14d12c61-000b-4766-93ff-ae30454582a7","Type":"ContainerStarted","Data":"f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991"} Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.961464 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" event={"ID":"14d12c61-000b-4766-93ff-ae30454582a7","Type":"ContainerStarted","Data":"d3de27128303b5d5d9b0e04b19d1fe23d8df501db7a5d0fb5935b7e2410a1b92"} Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.962411 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.971348 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"81652315-c167-4821-a265-3b415a0611e3","Type":"ContainerStarted","Data":"b4210d3e6f4644233b49d477d90a35db7eca600964f24a091c3ef15152abe319"} Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.974919 4688 generic.go:334] "Generic (PLEG): container finished" podID="bf1537ab-4a36-4cae-aa5c-5035d1705936" containerID="3a18192b3b71751799f0e9a888503a11e9f281465a62e0a91e5dfaa536179af6" exitCode=0 Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.975018 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" event={"ID":"bf1537ab-4a36-4cae-aa5c-5035d1705936","Type":"ContainerDied","Data":"3a18192b3b71751799f0e9a888503a11e9f281465a62e0a91e5dfaa536179af6"} Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.977057 4688 generic.go:334] "Generic (PLEG): container finished" podID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerID="48204aa2ef67a355e95893512288e73600e0540dc65a475b0ec4c5b350969a98" exitCode=0 Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.977725 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ntjl4" event={"ID":"f56e08ff-acbe-425f-9570-8ce3a06208b5","Type":"ContainerDied","Data":"48204aa2ef67a355e95893512288e73600e0540dc65a475b0ec4c5b350969a98"} Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.977755 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntjl4" event={"ID":"f56e08ff-acbe-425f-9570-8ce3a06208b5","Type":"ContainerStarted","Data":"b66741d23a082c2ae7c826ec890bc22ab9eda0e1846a53a196bb2984dca0d2d2"} Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.980063 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.982467 4688 generic.go:334] "Generic (PLEG): container finished" podID="a8846911-0e30-4c42-9703-08f18f377ee4" containerID="7e5c05f555fe0de5b429941e618e02bf39d6bda85472b89b4b551e8fc33d1d9a" exitCode=0 Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.982784 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8m7" event={"ID":"a8846911-0e30-4c42-9703-08f18f377ee4","Type":"ContainerDied","Data":"7e5c05f555fe0de5b429941e618e02bf39d6bda85472b89b4b551e8fc33d1d9a"} Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.982837 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8m7" event={"ID":"a8846911-0e30-4c42-9703-08f18f377ee4","Type":"ContainerStarted","Data":"04213f1e3e4cf6e8079119a60efbf5ba210a2e7e469339429a440261d7f04b81"} Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.987382 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" podStartSLOduration=131.987363916 podStartE2EDuration="2m11.987363916s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:14.985031695 +0000 UTC m=+152.280469253" watchObservedRunningTime="2025-09-30 17:18:14.987363916 +0000 UTC m=+152.282801474" Sep 30 17:18:14 crc kubenswrapper[4688]: I0930 17:18:14.989553 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q72fn" event={"ID":"43f69a53-da75-418d-a8c6-e0557d204bd3","Type":"ContainerStarted","Data":"ebaa390e4f77664199cd23342933cb17757eb474438729ab0db1027e28fa4a73"} Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.037979 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.037959229 podStartE2EDuration="2.037959229s" podCreationTimestamp="2025-09-30 17:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:15.031608313 +0000 UTC m=+152.327045871" watchObservedRunningTime="2025-09-30 17:18:15.037959229 +0000 UTC m=+152.333396787" Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.050734 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.069779 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
Sep 30 17:18:15 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Sep 30 17:18:15 crc kubenswrapper[4688]: [+]process-running ok
Sep 30 17:18:15 crc kubenswrapper[4688]: healthz check failed
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.069867 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.208065 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.209019 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.210618 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.211488 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.218745 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.258286 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.258740 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.360282 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.360365 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.360472 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.379884 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
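The bracketed lines above are the router's healthz response body: one [+]/[-] line per named check, reasons withheld unless verbose, and an overall "healthz check failed" with HTTP 500 when any check fails, which is exactly the statuscode the startup probe reports. A sketch of a handler emitting that format, assuming nothing about the router's real implementation:

package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

// healthz aggregates named checks into the [+]/[-] body seen in the log
// and returns 500 when any check fails, 200 otherwise.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // the 500 the probe sees
			body += "healthz check failed\n"
		} else {
			body += "healthz check passed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	// Check names taken from the log; the failing closures are placeholders.
	http.HandleFunc("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}))
	http.ListenAndServe(":8080", nil)
}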
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.446619 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.532657 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.751311 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.775750 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w94j6"]
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.778489 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.781677 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.790934 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w94j6"]
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.870068 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-utilities\") pod \"redhat-marketplace-w94j6\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.870129 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j28lt\" (UniqueName: \"kubernetes.io/projected/d20b8171-f309-4ae0-bde8-55dfd73aa45f-kube-api-access-j28lt\") pod \"redhat-marketplace-w94j6\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.870154 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-catalog-content\") pod \"redhat-marketplace-w94j6\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.971729 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-utilities\") pod \"redhat-marketplace-w94j6\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.971815 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j28lt\" (UniqueName: \"kubernetes.io/projected/d20b8171-f309-4ae0-bde8-55dfd73aa45f-kube-api-access-j28lt\") pod \"redhat-marketplace-w94j6\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.971878 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-catalog-content\") pod \"redhat-marketplace-w94j6\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.972726 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-catalog-content\") pod \"redhat-marketplace-w94j6\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:15 crc kubenswrapper[4688]: I0930 17:18:15.972968 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-utilities\") pod \"redhat-marketplace-w94j6\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.007519 4688 generic.go:334] "Generic (PLEG): container finished" podID="5879067e-2c76-404f-a70f-85af13c713b9" containerID="4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48" exitCode=0
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.007860 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgnxv" event={"ID":"5879067e-2c76-404f-a70f-85af13c713b9","Type":"ContainerDied","Data":"4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48"}
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.011590 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j28lt\" (UniqueName: \"kubernetes.io/projected/d20b8171-f309-4ae0-bde8-55dfd73aa45f-kube-api-access-j28lt\") pod \"redhat-marketplace-w94j6\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.014005 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"83c648e6-0ccd-4b20-a724-1608c36fdfaa","Type":"ContainerStarted","Data":"5534f7e9fa3bec9646089369afdf065491d003b1b317f3d0c3ecb0f25ad1a173"}
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.016525 4688 generic.go:334] "Generic (PLEG): container finished" podID="81652315-c167-4821-a265-3b415a0611e3" containerID="b4210d3e6f4644233b49d477d90a35db7eca600964f24a091c3ef15152abe319" exitCode=0
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.016638 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"81652315-c167-4821-a265-3b415a0611e3","Type":"ContainerDied","Data":"b4210d3e6f4644233b49d477d90a35db7eca600964f24a091c3ef15152abe319"}
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.028061 4688 generic.go:334] "Generic (PLEG): container finished" podID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerID="e1565c7bc4f1ccace256dcce2d6f2a07d40b4c8e9f918102b03266b1c801995e" exitCode=0
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.031410 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q72fn" event={"ID":"43f69a53-da75-418d-a8c6-e0557d204bd3","Type":"ContainerDied","Data":"e1565c7bc4f1ccace256dcce2d6f2a07d40b4c8e9f918102b03266b1c801995e"}
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.063421 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 30 17:18:16 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Sep 30 17:18:16 crc kubenswrapper[4688]: [+]process-running ok
Sep 30 17:18:16 crc kubenswrapper[4688]: healthz check failed
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.063518 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.105685 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w94j6"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.169863 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpx4"]
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.181501 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpx4"]
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.181659 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.253985 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.281484 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1537ab-4a36-4cae-aa5c-5035d1705936-secret-volume\") pod \"bf1537ab-4a36-4cae-aa5c-5035d1705936\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") "
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.281558 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhm9q\" (UniqueName: \"kubernetes.io/projected/bf1537ab-4a36-4cae-aa5c-5035d1705936-kube-api-access-lhm9q\") pod \"bf1537ab-4a36-4cae-aa5c-5035d1705936\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") "
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.281620 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1537ab-4a36-4cae-aa5c-5035d1705936-config-volume\") pod \"bf1537ab-4a36-4cae-aa5c-5035d1705936\" (UID: \"bf1537ab-4a36-4cae-aa5c-5035d1705936\") "
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.281947 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpcbl\" (UniqueName: \"kubernetes.io/projected/2dac7937-97ad-4f11-bfb8-c0bd8d558182-kube-api-access-cpcbl\") pod \"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.282017 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-catalog-content\") pod \"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.282083 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-utilities\") pod \"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.282671 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1537ab-4a36-4cae-aa5c-5035d1705936-config-volume" (OuterVolumeSpecName: "config-volume") pod "bf1537ab-4a36-4cae-aa5c-5035d1705936" (UID: "bf1537ab-4a36-4cae-aa5c-5035d1705936"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.288477 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1537ab-4a36-4cae-aa5c-5035d1705936-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bf1537ab-4a36-4cae-aa5c-5035d1705936" (UID: "bf1537ab-4a36-4cae-aa5c-5035d1705936"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.289507 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1537ab-4a36-4cae-aa5c-5035d1705936-kube-api-access-lhm9q" (OuterVolumeSpecName: "kube-api-access-lhm9q") pod "bf1537ab-4a36-4cae-aa5c-5035d1705936" (UID: "bf1537ab-4a36-4cae-aa5c-5035d1705936"). InnerVolumeSpecName "kube-api-access-lhm9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.385052 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpcbl\" (UniqueName: \"kubernetes.io/projected/2dac7937-97ad-4f11-bfb8-c0bd8d558182-kube-api-access-cpcbl\") pod \"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.385121 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-catalog-content\") pod \"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.385177 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-utilities\") pod \"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.385240 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1537ab-4a36-4cae-aa5c-5035d1705936-secret-volume\") on node \"crc\" DevicePath \"\""
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.385253 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhm9q\" (UniqueName: \"kubernetes.io/projected/bf1537ab-4a36-4cae-aa5c-5035d1705936-kube-api-access-lhm9q\") on node \"crc\" DevicePath \"\""
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.385262 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1537ab-4a36-4cae-aa5c-5035d1705936-config-volume\") on node \"crc\" DevicePath \"\""
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.385794 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-utilities\") pod \"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.386411 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-catalog-content\") pod \"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.408064 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpcbl\" (UniqueName: \"kubernetes.io/projected/2dac7937-97ad-4f11-bfb8-c0bd8d558182-kube-api-access-cpcbl\") pod \"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4"
\"redhat-marketplace-2qpx4\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") " pod="openshift-marketplace/redhat-marketplace-2qpx4" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.430644 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w94j6"] Sep 30 17:18:16 crc kubenswrapper[4688]: W0930 17:18:16.445859 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd20b8171_f309_4ae0_bde8_55dfd73aa45f.slice/crio-8e506d31fb9590e5e7d0b4badfb2ee3148941f3185cc46daf2bc5b9ed65127b3 WatchSource:0}: Error finding container 8e506d31fb9590e5e7d0b4badfb2ee3148941f3185cc46daf2bc5b9ed65127b3: Status 404 returned error can't find the container with id 8e506d31fb9590e5e7d0b4badfb2ee3148941f3185cc46daf2bc5b9ed65127b3 Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.505958 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qpx4" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.770134 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28lnl"] Sep 30 17:18:16 crc kubenswrapper[4688]: E0930 17:18:16.771984 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1537ab-4a36-4cae-aa5c-5035d1705936" containerName="collect-profiles" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.772006 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1537ab-4a36-4cae-aa5c-5035d1705936" containerName="collect-profiles" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.772160 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1537ab-4a36-4cae-aa5c-5035d1705936" containerName="collect-profiles" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.773116 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.776065 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.778784 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28lnl"] Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.793399 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-utilities\") pod \"redhat-operators-28lnl\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.793481 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjjh4\" (UniqueName: \"kubernetes.io/projected/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-kube-api-access-tjjh4\") pod \"redhat-operators-28lnl\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.793510 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-catalog-content\") pod \"redhat-operators-28lnl\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.894482 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjjh4\" (UniqueName: \"kubernetes.io/projected/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-kube-api-access-tjjh4\") pod \"redhat-operators-28lnl\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.894551 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-catalog-content\") pod \"redhat-operators-28lnl\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.894654 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-utilities\") pod \"redhat-operators-28lnl\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.895695 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-catalog-content\") pod \"redhat-operators-28lnl\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.899045 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-utilities\") pod \"redhat-operators-28lnl\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " 
pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.934625 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjjh4\" (UniqueName: \"kubernetes.io/projected/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-kube-api-access-tjjh4\") pod \"redhat-operators-28lnl\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:16 crc kubenswrapper[4688]: I0930 17:18:16.972426 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpx4"] Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.042552 4688 generic.go:334] "Generic (PLEG): container finished" podID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerID="f48aa37de1f30a6a662033f859ccf1460f8590431bc0496bcdf8229f7932a32e" exitCode=0 Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.042677 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w94j6" event={"ID":"d20b8171-f309-4ae0-bde8-55dfd73aa45f","Type":"ContainerDied","Data":"f48aa37de1f30a6a662033f859ccf1460f8590431bc0496bcdf8229f7932a32e"} Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.042708 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w94j6" event={"ID":"d20b8171-f309-4ae0-bde8-55dfd73aa45f","Type":"ContainerStarted","Data":"8e506d31fb9590e5e7d0b4badfb2ee3148941f3185cc46daf2bc5b9ed65127b3"} Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.053954 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:17 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:17 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:17 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.054050 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.055143 4688 generic.go:334] "Generic (PLEG): container finished" podID="83c648e6-0ccd-4b20-a724-1608c36fdfaa" containerID="c97067b978ce601e9fd6fd0409bd2e7dde84f8d3d12b3a060c6e1bc5455faca4" exitCode=0 Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.055195 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"83c648e6-0ccd-4b20-a724-1608c36fdfaa","Type":"ContainerDied","Data":"c97067b978ce601e9fd6fd0409bd2e7dde84f8d3d12b3a060c6e1bc5455faca4"} Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.076239 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" event={"ID":"bf1537ab-4a36-4cae-aa5c-5035d1705936","Type":"ContainerDied","Data":"f3adbc73e1f98c6275be2e2d5834c119723e20f2d70a2bdd2dfd18a8b06b118a"} Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.076281 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3adbc73e1f98c6275be2e2d5834c119723e20f2d70a2bdd2dfd18a8b06b118a" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.076352 4688 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.085873 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpx4" event={"ID":"2dac7937-97ad-4f11-bfb8-c0bd8d558182","Type":"ContainerStarted","Data":"d31ba4fd8863909b23cab9e2498d17fa47234f6790bbb04451b085d60e4a5c17"} Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.099839 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.164913 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-grkpw"] Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.170042 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.201476 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grkpw"] Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.299601 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-catalog-content\") pod \"redhat-operators-grkpw\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") " pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.299679 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-utilities\") pod \"redhat-operators-grkpw\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") " pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.299705 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pt4g\" (UniqueName: \"kubernetes.io/projected/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-kube-api-access-5pt4g\") pod \"redhat-operators-grkpw\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") " pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.340988 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.422182 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81652315-c167-4821-a265-3b415a0611e3-kube-api-access\") pod \"81652315-c167-4821-a265-3b415a0611e3\" (UID: \"81652315-c167-4821-a265-3b415a0611e3\") " Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.422364 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81652315-c167-4821-a265-3b415a0611e3-kubelet-dir\") pod \"81652315-c167-4821-a265-3b415a0611e3\" (UID: \"81652315-c167-4821-a265-3b415a0611e3\") " Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.435846 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81652315-c167-4821-a265-3b415a0611e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "81652315-c167-4821-a265-3b415a0611e3" (UID: "81652315-c167-4821-a265-3b415a0611e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.436077 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81652315-c167-4821-a265-3b415a0611e3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "81652315-c167-4821-a265-3b415a0611e3" (UID: "81652315-c167-4821-a265-3b415a0611e3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.437052 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-catalog-content\") pod \"redhat-operators-grkpw\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") " pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.437158 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-utilities\") pod \"redhat-operators-grkpw\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") " pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.437179 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pt4g\" (UniqueName: \"kubernetes.io/projected/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-kube-api-access-5pt4g\") pod \"redhat-operators-grkpw\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") " pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.437324 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81652315-c167-4821-a265-3b415a0611e3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.437337 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81652315-c167-4821-a265-3b415a0611e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.438384 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-catalog-content\") pod \"redhat-operators-grkpw\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") " pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.465563 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-utilities\") pod \"redhat-operators-grkpw\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") " pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.471006 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pt4g\" (UniqueName: \"kubernetes.io/projected/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-kube-api-access-5pt4g\") pod \"redhat-operators-grkpw\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") " pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.501390 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.526196 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28lnl"] Sep 30 17:18:17 crc kubenswrapper[4688]: W0930 17:18:17.633970 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb75d997_dcba_4913_a3de_e5eeaa27ecbc.slice/crio-ee2c8aafbd66bc4e6e9c0813d84ca92067518343d7dccdbc0ebe1ed953d0e2b0 WatchSource:0}: Error finding container ee2c8aafbd66bc4e6e9c0813d84ca92067518343d7dccdbc0ebe1ed953d0e2b0: Status 404 returned error can't find the container with id ee2c8aafbd66bc4e6e9c0813d84ca92067518343d7dccdbc0ebe1ed953d0e2b0 Sep 30 17:18:17 crc kubenswrapper[4688]: I0930 17:18:17.857465 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grkpw"] Sep 30 17:18:17 crc kubenswrapper[4688]: W0930 17:18:17.868913 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0a597c2_90ff_4eb6_93d5_cafd73bdd02e.slice/crio-529b0fefe3349e11a49addb2c08be8054b74a49f92613e176f8151c5ae8f6f3d WatchSource:0}: Error finding container 529b0fefe3349e11a49addb2c08be8054b74a49f92613e176f8151c5ae8f6f3d: Status 404 returned error can't find the container with id 529b0fefe3349e11a49addb2c08be8054b74a49f92613e176f8151c5ae8f6f3d Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.050199 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:18 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:18 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:18 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.050264 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.097895 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.097878 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"81652315-c167-4821-a265-3b415a0611e3","Type":"ContainerDied","Data":"81034525f2240d8854a87ece99f097ec9bd486f9260532e7207ff49ce6ff50f7"} Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.098097 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81034525f2240d8854a87ece99f097ec9bd486f9260532e7207ff49ce6ff50f7" Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.102355 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkpw" event={"ID":"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e","Type":"ContainerStarted","Data":"529b0fefe3349e11a49addb2c08be8054b74a49f92613e176f8151c5ae8f6f3d"} Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.104233 4688 generic.go:334] "Generic (PLEG): container finished" podID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerID="04f5269a0a69bacbd3d77adf6fbd98462b4cae7650fa75598b999fec771943c6" exitCode=0 Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.104299 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28lnl" event={"ID":"bb75d997-dcba-4913-a3de-e5eeaa27ecbc","Type":"ContainerDied","Data":"04f5269a0a69bacbd3d77adf6fbd98462b4cae7650fa75598b999fec771943c6"} Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.104330 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28lnl" event={"ID":"bb75d997-dcba-4913-a3de-e5eeaa27ecbc","Type":"ContainerStarted","Data":"ee2c8aafbd66bc4e6e9c0813d84ca92067518343d7dccdbc0ebe1ed953d0e2b0"} Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.111724 4688 generic.go:334] "Generic (PLEG): container finished" podID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerID="e77e83f4c5af807a71bbe8e82a7f9fa5835409e51e91e9ab234b5ef0cc60bdc9" exitCode=0 Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.112228 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpx4" event={"ID":"2dac7937-97ad-4f11-bfb8-c0bd8d558182","Type":"ContainerDied","Data":"e77e83f4c5af807a71bbe8e82a7f9fa5835409e51e91e9ab234b5ef0cc60bdc9"} Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.360628 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.415833 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.420823 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vxw96" Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.455242 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kubelet-dir\") pod \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\" (UID: \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\") " Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.455333 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kube-api-access\") pod \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\" (UID: \"83c648e6-0ccd-4b20-a724-1608c36fdfaa\") " Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.456034 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "83c648e6-0ccd-4b20-a724-1608c36fdfaa" (UID: "83c648e6-0ccd-4b20-a724-1608c36fdfaa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.489115 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "83c648e6-0ccd-4b20-a724-1608c36fdfaa" (UID: "83c648e6-0ccd-4b20-a724-1608c36fdfaa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.557404 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:18 crc kubenswrapper[4688]: I0930 17:18:18.557446 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/83c648e6-0ccd-4b20-a724-1608c36fdfaa-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 17:18:19 crc kubenswrapper[4688]: I0930 17:18:19.049536 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:19 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:19 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:19 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:19 crc kubenswrapper[4688]: I0930 17:18:19.049609 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:19 crc kubenswrapper[4688]: I0930 17:18:19.126715 4688 generic.go:334] "Generic (PLEG): container finished" podID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerID="aee43419a81dcd42830fc4fe1f164d92c6fedad6c902605602a3701691644823" exitCode=0 Sep 30 17:18:19 crc kubenswrapper[4688]: I0930 17:18:19.126795 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkpw" event={"ID":"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e","Type":"ContainerDied","Data":"aee43419a81dcd42830fc4fe1f164d92c6fedad6c902605602a3701691644823"} Sep 30 17:18:19 crc kubenswrapper[4688]: I0930 17:18:19.137243 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 17:18:19 crc kubenswrapper[4688]: I0930 17:18:19.137226 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"83c648e6-0ccd-4b20-a724-1608c36fdfaa","Type":"ContainerDied","Data":"5534f7e9fa3bec9646089369afdf065491d003b1b317f3d0c3ecb0f25ad1a173"} Sep 30 17:18:19 crc kubenswrapper[4688]: I0930 17:18:19.138073 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5534f7e9fa3bec9646089369afdf065491d003b1b317f3d0c3ecb0f25ad1a173" Sep 30 17:18:20 crc kubenswrapper[4688]: I0930 17:18:20.049182 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:20 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:20 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:20 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:20 crc kubenswrapper[4688]: I0930 17:18:20.049259 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:20 crc kubenswrapper[4688]: I0930 17:18:20.182650 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-65pp7" Sep 30 17:18:21 crc kubenswrapper[4688]: I0930 17:18:21.048753 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:21 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:21 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:21 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:21 crc kubenswrapper[4688]: I0930 17:18:21.048834 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:22 crc kubenswrapper[4688]: I0930 17:18:22.049206 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:22 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:22 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:22 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:22 crc kubenswrapper[4688]: I0930 17:18:22.049283 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:22 crc kubenswrapper[4688]: I0930 17:18:22.545681 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:18:22 crc kubenswrapper[4688]: I0930 17:18:22.546146 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:18:23 crc kubenswrapper[4688]: I0930 17:18:23.049805 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:23 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:23 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:23 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:23 crc kubenswrapper[4688]: I0930 17:18:23.049886 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:24 crc kubenswrapper[4688]: I0930 17:18:24.048711 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:24 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:24 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:24 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:24 crc kubenswrapper[4688]: I0930 17:18:24.048803 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:24 crc kubenswrapper[4688]: I0930 17:18:24.364984 4688 patch_prober.go:28] interesting pod/console-f9d7485db-wl7nh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Sep 30 17:18:24 crc kubenswrapper[4688]: I0930 17:18:24.365080 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wl7nh" podUID="19515a5a-5268-41e3-8288-0e41a71c9dc4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Sep 30 17:18:24 crc kubenswrapper[4688]: I0930 17:18:24.398234 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-t6z28" Sep 30 17:18:25 crc kubenswrapper[4688]: I0930 17:18:25.049952 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:25 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:25 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:25 crc kubenswrapper[4688]: healthz check failed Sep 30 
17:18:25 crc kubenswrapper[4688]: I0930 17:18:25.050747 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:26 crc kubenswrapper[4688]: I0930 17:18:26.049597 4688 patch_prober.go:28] interesting pod/router-default-5444994796-bhb8d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 17:18:26 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Sep 30 17:18:26 crc kubenswrapper[4688]: [+]process-running ok Sep 30 17:18:26 crc kubenswrapper[4688]: healthz check failed Sep 30 17:18:26 crc kubenswrapper[4688]: I0930 17:18:26.049697 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bhb8d" podUID="22bdca4c-6929-427a-93bd-6b39d5b1376d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 17:18:26 crc kubenswrapper[4688]: I0930 17:18:26.072049 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:18:26 crc kubenswrapper[4688]: I0930 17:18:26.094165 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ba3e9d-4109-4a56-aac3-45917eec36d7-metrics-certs\") pod \"network-metrics-daemon-gbfmv\" (UID: \"23ba3e9d-4109-4a56-aac3-45917eec36d7\") " pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:18:26 crc kubenswrapper[4688]: I0930 17:18:26.376027 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gbfmv" Sep 30 17:18:27 crc kubenswrapper[4688]: I0930 17:18:27.049944 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:27 crc kubenswrapper[4688]: I0930 17:18:27.053170 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-bhb8d" Sep 30 17:18:34 crc kubenswrapper[4688]: I0930 17:18:34.079652 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:18:34 crc kubenswrapper[4688]: I0930 17:18:34.369561 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:34 crc kubenswrapper[4688]: I0930 17:18:34.373972 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wl7nh" Sep 30 17:18:39 crc kubenswrapper[4688]: E0930 17:18:39.564921 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Sep 30 17:18:39 crc kubenswrapper[4688]: E0930 17:18:39.565553 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcpsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fh8m7_openshift-marketplace(a8846911-0e30-4c42-9703-08f18f377ee4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 30 17:18:39 crc kubenswrapper[4688]: E0930 17:18:39.566921 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled\"" pod="openshift-marketplace/community-operators-fh8m7" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" Sep 30 17:18:44 crc kubenswrapper[4688]: I0930 17:18:44.716159 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmr2p" Sep 30 17:18:45 crc kubenswrapper[4688]: E0930 17:18:45.035228 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fh8m7" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" Sep 30 17:18:45 crc kubenswrapper[4688]: E0930 17:18:45.130995 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Sep 30 17:18:45 crc kubenswrapper[4688]: E0930 17:18:45.131363 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmmcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q72fn_openshift-marketplace(43f69a53-da75-418d-a8c6-e0557d204bd3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 30 17:18:45 crc kubenswrapper[4688]: E0930 17:18:45.132605 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q72fn" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" Sep 30 17:18:45 crc kubenswrapper[4688]: E0930 17:18:45.379444 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Sep 30 17:18:45 crc kubenswrapper[4688]: E0930 17:18:45.379987 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbdjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ntjl4_openshift-marketplace(f56e08ff-acbe-425f-9570-8ce3a06208b5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 30 17:18:45 crc kubenswrapper[4688]: E0930 17:18:45.381252 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ntjl4" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" Sep 30 17:18:45 crc kubenswrapper[4688]: I0930 17:18:45.454608 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gbfmv"] Sep 30 17:18:51 crc kubenswrapper[4688]: I0930 17:18:51.374075 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 17:18:52 crc kubenswrapper[4688]: E0930 17:18:52.157925 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q72fn" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" Sep 30 17:18:52 crc kubenswrapper[4688]: E0930 17:18:52.158033 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ntjl4" 
podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" Sep 30 17:18:52 crc kubenswrapper[4688]: W0930 17:18:52.170454 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ba3e9d_4109_4a56_aac3_45917eec36d7.slice/crio-077c8420ac5362cf669058329a0bbbbb3528f675819273a5fcd0c163d48a06be WatchSource:0}: Error finding container 077c8420ac5362cf669058329a0bbbbb3528f675819273a5fcd0c163d48a06be: Status 404 returned error can't find the container with id 077c8420ac5362cf669058329a0bbbbb3528f675819273a5fcd0c163d48a06be Sep 30 17:18:52 crc kubenswrapper[4688]: I0930 17:18:52.365601 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" event={"ID":"23ba3e9d-4109-4a56-aac3-45917eec36d7","Type":"ContainerStarted","Data":"077c8420ac5362cf669058329a0bbbbb3528f675819273a5fcd0c163d48a06be"} Sep 30 17:18:52 crc kubenswrapper[4688]: I0930 17:18:52.545506 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:18:52 crc kubenswrapper[4688]: I0930 17:18:52.545653 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:18:52 crc kubenswrapper[4688]: E0930 17:18:52.937791 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Sep 30 17:18:52 crc kubenswrapper[4688]: E0930 17:18:52.938000 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjjh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-28lnl_openshift-marketplace(bb75d997-dcba-4913-a3de-e5eeaa27ecbc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 30 17:18:52 crc kubenswrapper[4688]: E0930 17:18:52.939147 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-28lnl" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" Sep 30 17:18:53 crc kubenswrapper[4688]: E0930 17:18:53.254267 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Sep 30 17:18:53 crc kubenswrapper[4688]: E0930 17:18:53.255081 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pt4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-grkpw_openshift-marketplace(a0a597c2-90ff-4eb6-93d5-cafd73bdd02e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 30 17:18:53 crc kubenswrapper[4688]: E0930 17:18:53.256343 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-grkpw" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" Sep 30 17:18:53 crc kubenswrapper[4688]: E0930 17:18:53.639162 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-grkpw" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" Sep 30 17:18:53 crc kubenswrapper[4688]: E0930 17:18:53.639262 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-28lnl" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" Sep 30 17:18:54 crc kubenswrapper[4688]: E0930 17:18:54.295070 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Sep 30 17:18:54 crc kubenswrapper[4688]: E0930 17:18:54.295334 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpcbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2qpx4_openshift-marketplace(2dac7937-97ad-4f11-bfb8-c0bd8d558182): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 30 17:18:54 crc kubenswrapper[4688]: E0930 17:18:54.296649 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2qpx4" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" Sep 30 17:18:54 crc kubenswrapper[4688]: E0930 17:18:54.663676 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2qpx4" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" Sep 30 17:18:55 crc kubenswrapper[4688]: I0930 17:18:55.387920 4688 generic.go:334] "Generic (PLEG): container finished" podID="5879067e-2c76-404f-a70f-85af13c713b9" containerID="39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23" exitCode=0 Sep 30 17:18:55 crc kubenswrapper[4688]: I0930 17:18:55.388019 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgnxv" event={"ID":"5879067e-2c76-404f-a70f-85af13c713b9","Type":"ContainerDied","Data":"39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23"} Sep 30 17:18:55 crc kubenswrapper[4688]: I0930 17:18:55.400269 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w94j6" event={"ID":"d20b8171-f309-4ae0-bde8-55dfd73aa45f","Type":"ContainerDied","Data":"48776ebf1cfa042613d0e2bfe2818a05018a94da495145766959ead9d0269f57"} Sep 30 17:18:55 crc kubenswrapper[4688]: I0930 17:18:55.399685 4688 generic.go:334] "Generic (PLEG): container finished" podID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerID="48776ebf1cfa042613d0e2bfe2818a05018a94da495145766959ead9d0269f57" exitCode=0 Sep 30 17:18:55 crc 
kubenswrapper[4688]: I0930 17:18:55.404000 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" event={"ID":"23ba3e9d-4109-4a56-aac3-45917eec36d7","Type":"ContainerStarted","Data":"3eed03fc2b6baaebd7668ab16e0a8ba7bbd17434fbabfa384bd64b28f09200ac"} Sep 30 17:18:56 crc kubenswrapper[4688]: I0930 17:18:56.412477 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgnxv" event={"ID":"5879067e-2c76-404f-a70f-85af13c713b9","Type":"ContainerStarted","Data":"c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965"} Sep 30 17:18:56 crc kubenswrapper[4688]: I0930 17:18:56.416433 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w94j6" event={"ID":"d20b8171-f309-4ae0-bde8-55dfd73aa45f","Type":"ContainerStarted","Data":"579d598e333c06fac34e2d11259bb27b76fdc312a3f0ff980b2c357ceac5642d"} Sep 30 17:18:56 crc kubenswrapper[4688]: I0930 17:18:56.422292 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gbfmv" event={"ID":"23ba3e9d-4109-4a56-aac3-45917eec36d7","Type":"ContainerStarted","Data":"c1a02ebdc42570005f79c64bbb76a115ddc3d0eba72860d4b7b37c3329e8d21d"} Sep 30 17:18:56 crc kubenswrapper[4688]: I0930 17:18:56.451337 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jgnxv" podStartSLOduration=2.493444127 podStartE2EDuration="42.451307146s" podCreationTimestamp="2025-09-30 17:18:14 +0000 UTC" firstStartedPulling="2025-09-30 17:18:16.012498159 +0000 UTC m=+153.307935717" lastFinishedPulling="2025-09-30 17:18:55.970361178 +0000 UTC m=+193.265798736" observedRunningTime="2025-09-30 17:18:56.435044291 +0000 UTC m=+193.730481849" watchObservedRunningTime="2025-09-30 17:18:56.451307146 +0000 UTC m=+193.746744714" Sep 30 17:18:56 crc kubenswrapper[4688]: I0930 17:18:56.453292 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gbfmv" podStartSLOduration=173.453279738 podStartE2EDuration="2m53.453279738s" podCreationTimestamp="2025-09-30 17:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:18:56.450254179 +0000 UTC m=+193.745691737" watchObservedRunningTime="2025-09-30 17:18:56.453279738 +0000 UTC m=+193.748717306" Sep 30 17:18:56 crc kubenswrapper[4688]: I0930 17:18:56.469223 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w94j6" podStartSLOduration=2.398984311 podStartE2EDuration="41.469197774s" podCreationTimestamp="2025-09-30 17:18:15 +0000 UTC" firstStartedPulling="2025-09-30 17:18:17.044290525 +0000 UTC m=+154.339728083" lastFinishedPulling="2025-09-30 17:18:56.114503988 +0000 UTC m=+193.409941546" observedRunningTime="2025-09-30 17:18:56.46673782 +0000 UTC m=+193.762175388" watchObservedRunningTime="2025-09-30 17:18:56.469197774 +0000 UTC m=+193.764635322" Sep 30 17:19:02 crc kubenswrapper[4688]: I0930 17:19:02.460982 4688 generic.go:334] "Generic (PLEG): container finished" podID="a8846911-0e30-4c42-9703-08f18f377ee4" containerID="7d0dde013eec0dfd36850a9b5aef4057bdbe292c6aa5a7da7e25edd58c9a68c7" exitCode=0 Sep 30 17:19:02 crc kubenswrapper[4688]: I0930 17:19:02.461058 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8m7" 
event={"ID":"a8846911-0e30-4c42-9703-08f18f377ee4","Type":"ContainerDied","Data":"7d0dde013eec0dfd36850a9b5aef4057bdbe292c6aa5a7da7e25edd58c9a68c7"} Sep 30 17:19:03 crc kubenswrapper[4688]: I0930 17:19:03.468025 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8m7" event={"ID":"a8846911-0e30-4c42-9703-08f18f377ee4","Type":"ContainerStarted","Data":"b361802751ffc4faed583c6cee71382f3fd5eb2c8d428e74ab33e27909f7f55c"} Sep 30 17:19:03 crc kubenswrapper[4688]: I0930 17:19:03.490912 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fh8m7" podStartSLOduration=2.279603019 podStartE2EDuration="50.490889717s" podCreationTimestamp="2025-09-30 17:18:13 +0000 UTC" firstStartedPulling="2025-09-30 17:18:14.985455056 +0000 UTC m=+152.280892614" lastFinishedPulling="2025-09-30 17:19:03.196741764 +0000 UTC m=+200.492179312" observedRunningTime="2025-09-30 17:19:03.488825272 +0000 UTC m=+200.784262840" watchObservedRunningTime="2025-09-30 17:19:03.490889717 +0000 UTC m=+200.786327275" Sep 30 17:19:03 crc kubenswrapper[4688]: I0930 17:19:03.883232 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:19:03 crc kubenswrapper[4688]: I0930 17:19:03.883292 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:19:04 crc kubenswrapper[4688]: I0930 17:19:04.538520 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:19:04 crc kubenswrapper[4688]: I0930 17:19:04.539102 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:19:04 crc kubenswrapper[4688]: I0930 17:19:04.595173 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:19:05 crc kubenswrapper[4688]: I0930 17:19:05.126485 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fh8m7" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" containerName="registry-server" probeResult="failure" output=< Sep 30 17:19:05 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Sep 30 17:19:05 crc kubenswrapper[4688]: > Sep 30 17:19:05 crc kubenswrapper[4688]: I0930 17:19:05.526402 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:19:05 crc kubenswrapper[4688]: I0930 17:19:05.829286 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgnxv"] Sep 30 17:19:06 crc kubenswrapper[4688]: I0930 17:19:06.106513 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w94j6" Sep 30 17:19:06 crc kubenswrapper[4688]: I0930 17:19:06.106638 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w94j6" Sep 30 17:19:06 crc kubenswrapper[4688]: I0930 17:19:06.182258 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w94j6" Sep 30 17:19:06 crc kubenswrapper[4688]: I0930 17:19:06.533655 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-w94j6" Sep 30 17:19:07 crc kubenswrapper[4688]: I0930 17:19:07.494409 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntjl4" event={"ID":"f56e08ff-acbe-425f-9570-8ce3a06208b5","Type":"ContainerStarted","Data":"e3a7855486813cfd84ccc7bf315d1e2d3f1833472cae37f7d9714c219925ec6b"} Sep 30 17:19:07 crc kubenswrapper[4688]: I0930 17:19:07.501446 4688 generic.go:334] "Generic (PLEG): container finished" podID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerID="bbf731940390f13cf2ab11d5a84ba46479980019e38b2e4ca22ccb8bb80bfb38" exitCode=0 Sep 30 17:19:07 crc kubenswrapper[4688]: I0930 17:19:07.501554 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpx4" event={"ID":"2dac7937-97ad-4f11-bfb8-c0bd8d558182","Type":"ContainerDied","Data":"bbf731940390f13cf2ab11d5a84ba46479980019e38b2e4ca22ccb8bb80bfb38"} Sep 30 17:19:07 crc kubenswrapper[4688]: I0930 17:19:07.501871 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jgnxv" podUID="5879067e-2c76-404f-a70f-85af13c713b9" containerName="registry-server" containerID="cri-o://c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965" gracePeriod=2 Sep 30 17:19:07 crc kubenswrapper[4688]: I0930 17:19:07.885139 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.025331 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-catalog-content\") pod \"5879067e-2c76-404f-a70f-85af13c713b9\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.025609 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-utilities\") pod \"5879067e-2c76-404f-a70f-85af13c713b9\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.025716 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqdpx\" (UniqueName: \"kubernetes.io/projected/5879067e-2c76-404f-a70f-85af13c713b9-kube-api-access-nqdpx\") pod \"5879067e-2c76-404f-a70f-85af13c713b9\" (UID: \"5879067e-2c76-404f-a70f-85af13c713b9\") " Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.026320 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-utilities" (OuterVolumeSpecName: "utilities") pod "5879067e-2c76-404f-a70f-85af13c713b9" (UID: "5879067e-2c76-404f-a70f-85af13c713b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.038172 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5879067e-2c76-404f-a70f-85af13c713b9-kube-api-access-nqdpx" (OuterVolumeSpecName: "kube-api-access-nqdpx") pod "5879067e-2c76-404f-a70f-85af13c713b9" (UID: "5879067e-2c76-404f-a70f-85af13c713b9"). InnerVolumeSpecName "kube-api-access-nqdpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.072639 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5879067e-2c76-404f-a70f-85af13c713b9" (UID: "5879067e-2c76-404f-a70f-85af13c713b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.128041 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.128073 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5879067e-2c76-404f-a70f-85af13c713b9-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.128085 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqdpx\" (UniqueName: \"kubernetes.io/projected/5879067e-2c76-404f-a70f-85af13c713b9-kube-api-access-nqdpx\") on node \"crc\" DevicePath \"\"" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.508503 4688 generic.go:334] "Generic (PLEG): container finished" podID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerID="e3a7855486813cfd84ccc7bf315d1e2d3f1833472cae37f7d9714c219925ec6b" exitCode=0 Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.508576 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntjl4" event={"ID":"f56e08ff-acbe-425f-9570-8ce3a06208b5","Type":"ContainerDied","Data":"e3a7855486813cfd84ccc7bf315d1e2d3f1833472cae37f7d9714c219925ec6b"} Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.510832 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpx4" event={"ID":"2dac7937-97ad-4f11-bfb8-c0bd8d558182","Type":"ContainerStarted","Data":"5907a3114b2c689a10e78e85ef6852add51558e011a7c4711a4ca43f093b1c08"} Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.518230 4688 generic.go:334] "Generic (PLEG): container finished" podID="5879067e-2c76-404f-a70f-85af13c713b9" containerID="c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965" exitCode=0 Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.518279 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgnxv" event={"ID":"5879067e-2c76-404f-a70f-85af13c713b9","Type":"ContainerDied","Data":"c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965"} Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.518314 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgnxv" event={"ID":"5879067e-2c76-404f-a70f-85af13c713b9","Type":"ContainerDied","Data":"a635657ec9e0b3c1f08dfc81ffbc964428eae7d88d9949878f8503515971c7aa"} Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.518368 4688 scope.go:117] "RemoveContainer" containerID="c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.518500 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jgnxv" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.558750 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2qpx4" podStartSLOduration=2.794748337 podStartE2EDuration="52.558728862s" podCreationTimestamp="2025-09-30 17:18:16 +0000 UTC" firstStartedPulling="2025-09-30 17:18:18.116290214 +0000 UTC m=+155.411727772" lastFinishedPulling="2025-09-30 17:19:07.880270739 +0000 UTC m=+205.175708297" observedRunningTime="2025-09-30 17:19:08.55527406 +0000 UTC m=+205.850711648" watchObservedRunningTime="2025-09-30 17:19:08.558728862 +0000 UTC m=+205.854166430" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.571243 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgnxv"] Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.590363 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jgnxv"] Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.598916 4688 scope.go:117] "RemoveContainer" containerID="39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.631891 4688 scope.go:117] "RemoveContainer" containerID="4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.669125 4688 scope.go:117] "RemoveContainer" containerID="c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965" Sep 30 17:19:08 crc kubenswrapper[4688]: E0930 17:19:08.669945 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965\": container with ID starting with c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965 not found: ID does not exist" containerID="c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.670008 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965"} err="failed to get container status \"c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965\": rpc error: code = NotFound desc = could not find container \"c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965\": container with ID starting with c1d271f448a49cdcbd9f7ad824e1c7eb0386b90ce4be23366696a5546d71d965 not found: ID does not exist" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.670083 4688 scope.go:117] "RemoveContainer" containerID="39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23" Sep 30 17:19:08 crc kubenswrapper[4688]: E0930 17:19:08.671771 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23\": container with ID starting with 39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23 not found: ID does not exist" containerID="39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.671822 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23"} err="failed to get container 
status \"39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23\": rpc error: code = NotFound desc = could not find container \"39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23\": container with ID starting with 39d50eb958df4349aef0079d10aac80ff41435a32b3e335d9a5bef51c29ccf23 not found: ID does not exist" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.671856 4688 scope.go:117] "RemoveContainer" containerID="4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48" Sep 30 17:19:08 crc kubenswrapper[4688]: E0930 17:19:08.675354 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48\": container with ID starting with 4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48 not found: ID does not exist" containerID="4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48" Sep 30 17:19:08 crc kubenswrapper[4688]: I0930 17:19:08.675540 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48"} err="failed to get container status \"4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48\": rpc error: code = NotFound desc = could not find container \"4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48\": container with ID starting with 4fb3be1282b94e0d7fd45a7fe0fff075f5d8ff3c19ea0d9c7337800e7bda6e48 not found: ID does not exist" Sep 30 17:19:09 crc kubenswrapper[4688]: I0930 17:19:09.440778 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5879067e-2c76-404f-a70f-85af13c713b9" path="/var/lib/kubelet/pods/5879067e-2c76-404f-a70f-85af13c713b9/volumes" Sep 30 17:19:09 crc kubenswrapper[4688]: I0930 17:19:09.529789 4688 generic.go:334] "Generic (PLEG): container finished" podID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerID="8eecc5eb101c8821f30c22074ccf6650b9e61e83dd627205c50e58e3c9680dc5" exitCode=0 Sep 30 17:19:09 crc kubenswrapper[4688]: I0930 17:19:09.529900 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28lnl" event={"ID":"bb75d997-dcba-4913-a3de-e5eeaa27ecbc","Type":"ContainerDied","Data":"8eecc5eb101c8821f30c22074ccf6650b9e61e83dd627205c50e58e3c9680dc5"} Sep 30 17:19:09 crc kubenswrapper[4688]: I0930 17:19:09.535441 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntjl4" event={"ID":"f56e08ff-acbe-425f-9570-8ce3a06208b5","Type":"ContainerStarted","Data":"85ab9e7789aa905ea8391b9c816ad937fa812fb8a9ee67a44b1005e761e4bd02"} Sep 30 17:19:09 crc kubenswrapper[4688]: I0930 17:19:09.537735 4688 generic.go:334] "Generic (PLEG): container finished" podID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerID="e3757db0b3b86b03896e8b2761842626bd487446116a83c7c918612335d01c45" exitCode=0 Sep 30 17:19:09 crc kubenswrapper[4688]: I0930 17:19:09.537765 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q72fn" event={"ID":"43f69a53-da75-418d-a8c6-e0557d204bd3","Type":"ContainerDied","Data":"e3757db0b3b86b03896e8b2761842626bd487446116a83c7c918612335d01c45"} Sep 30 17:19:09 crc kubenswrapper[4688]: I0930 17:19:09.597813 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ntjl4" podStartSLOduration=2.583563397 podStartE2EDuration="56.597785054s" 
podCreationTimestamp="2025-09-30 17:18:13 +0000 UTC" firstStartedPulling="2025-09-30 17:18:14.979189902 +0000 UTC m=+152.274627460" lastFinishedPulling="2025-09-30 17:19:08.993411559 +0000 UTC m=+206.288849117" observedRunningTime="2025-09-30 17:19:09.588714794 +0000 UTC m=+206.884152382" watchObservedRunningTime="2025-09-30 17:19:09.597785054 +0000 UTC m=+206.893222642" Sep 30 17:19:10 crc kubenswrapper[4688]: I0930 17:19:10.548307 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28lnl" event={"ID":"bb75d997-dcba-4913-a3de-e5eeaa27ecbc","Type":"ContainerStarted","Data":"780bd376c7f6739ee92d44051aa2005d182dcf75b869158ff5b96a686cec9bb9"} Sep 30 17:19:10 crc kubenswrapper[4688]: I0930 17:19:10.551117 4688 generic.go:334] "Generic (PLEG): container finished" podID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerID="4dce637e9726aac363bf61e25d77ad219435fea50d1e185f7f8ee4aeaf683991" exitCode=0 Sep 30 17:19:10 crc kubenswrapper[4688]: I0930 17:19:10.551201 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkpw" event={"ID":"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e","Type":"ContainerDied","Data":"4dce637e9726aac363bf61e25d77ad219435fea50d1e185f7f8ee4aeaf683991"} Sep 30 17:19:10 crc kubenswrapper[4688]: I0930 17:19:10.553837 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q72fn" event={"ID":"43f69a53-da75-418d-a8c6-e0557d204bd3","Type":"ContainerStarted","Data":"40bc259ed3260fd48f6e74e2c6730f4924a18d654fc352a2a02ab11843a186f4"} Sep 30 17:19:10 crc kubenswrapper[4688]: I0930 17:19:10.578936 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-28lnl" podStartSLOduration=2.633481926 podStartE2EDuration="54.578910407s" podCreationTimestamp="2025-09-30 17:18:16 +0000 UTC" firstStartedPulling="2025-09-30 17:18:18.10659075 +0000 UTC m=+155.402028308" lastFinishedPulling="2025-09-30 17:19:10.052019231 +0000 UTC m=+207.347456789" observedRunningTime="2025-09-30 17:19:10.571904771 +0000 UTC m=+207.867342349" watchObservedRunningTime="2025-09-30 17:19:10.578910407 +0000 UTC m=+207.874347975" Sep 30 17:19:10 crc kubenswrapper[4688]: I0930 17:19:10.623044 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q72fn" podStartSLOduration=3.696635244 podStartE2EDuration="57.623015279s" podCreationTimestamp="2025-09-30 17:18:13 +0000 UTC" firstStartedPulling="2025-09-30 17:18:16.03204215 +0000 UTC m=+153.327479708" lastFinishedPulling="2025-09-30 17:19:09.958422185 +0000 UTC m=+207.253859743" observedRunningTime="2025-09-30 17:19:10.618841188 +0000 UTC m=+207.914278766" watchObservedRunningTime="2025-09-30 17:19:10.623015279 +0000 UTC m=+207.918452847" Sep 30 17:19:11 crc kubenswrapper[4688]: I0930 17:19:11.564410 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkpw" event={"ID":"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e","Type":"ContainerStarted","Data":"6a8d25a3badf24ac979d71ef431f56065bef14aa430161013f0b2aa2b6812e92"} Sep 30 17:19:11 crc kubenswrapper[4688]: I0930 17:19:11.586470 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-grkpw" podStartSLOduration=6.589128125 podStartE2EDuration="54.586447822s" podCreationTimestamp="2025-09-30 17:18:17 +0000 UTC" firstStartedPulling="2025-09-30 17:18:22.986548328 +0000 UTC 
m=+160.281985876" lastFinishedPulling="2025-09-30 17:19:10.983868015 +0000 UTC m=+208.279305573" observedRunningTime="2025-09-30 17:19:11.58410415 +0000 UTC m=+208.879541738" watchObservedRunningTime="2025-09-30 17:19:11.586447822 +0000 UTC m=+208.881885370" Sep 30 17:19:13 crc kubenswrapper[4688]: I0930 17:19:13.932023 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:19:13 crc kubenswrapper[4688]: I0930 17:19:13.975523 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:19:14 crc kubenswrapper[4688]: I0930 17:19:14.139996 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:19:14 crc kubenswrapper[4688]: I0930 17:19:14.140349 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:19:14 crc kubenswrapper[4688]: I0930 17:19:14.183759 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:19:14 crc kubenswrapper[4688]: I0930 17:19:14.295144 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:19:14 crc kubenswrapper[4688]: I0930 17:19:14.295200 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:19:14 crc kubenswrapper[4688]: I0930 17:19:14.345674 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q72fn" Sep 30 17:19:14 crc kubenswrapper[4688]: I0930 17:19:14.625064 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:19:16 crc kubenswrapper[4688]: I0930 17:19:16.506314 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2qpx4" Sep 30 17:19:16 crc kubenswrapper[4688]: I0930 17:19:16.507156 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2qpx4" Sep 30 17:19:16 crc kubenswrapper[4688]: I0930 17:19:16.567096 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2qpx4" Sep 30 17:19:16 crc kubenswrapper[4688]: I0930 17:19:16.638624 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2qpx4" Sep 30 17:19:17 crc kubenswrapper[4688]: I0930 17:19:17.101304 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:19:17 crc kubenswrapper[4688]: I0930 17:19:17.101721 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:19:17 crc kubenswrapper[4688]: I0930 17:19:17.148987 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:19:17 crc kubenswrapper[4688]: I0930 17:19:17.502557 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:19:17 crc kubenswrapper[4688]: I0930 17:19:17.502658 4688 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:19:17 crc kubenswrapper[4688]: I0930 17:19:17.549676 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:19:17 crc kubenswrapper[4688]: I0930 17:19:17.638602 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:19:17 crc kubenswrapper[4688]: I0930 17:19:17.639682 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-grkpw" Sep 30 17:19:20 crc kubenswrapper[4688]: I0930 17:19:20.225558 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpx4"] Sep 30 17:19:20 crc kubenswrapper[4688]: I0930 17:19:20.226000 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2qpx4" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerName="registry-server" containerID="cri-o://5907a3114b2c689a10e78e85ef6852add51558e011a7c4711a4ca43f093b1c08" gracePeriod=2 Sep 30 17:19:20 crc kubenswrapper[4688]: I0930 17:19:20.425714 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grkpw"] Sep 30 17:19:20 crc kubenswrapper[4688]: I0930 17:19:20.426611 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-grkpw" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerName="registry-server" containerID="cri-o://6a8d25a3badf24ac979d71ef431f56065bef14aa430161013f0b2aa2b6812e92" gracePeriod=2 Sep 30 17:19:21 crc kubenswrapper[4688]: I0930 17:19:21.629497 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpx4" event={"ID":"2dac7937-97ad-4f11-bfb8-c0bd8d558182","Type":"ContainerDied","Data":"5907a3114b2c689a10e78e85ef6852add51558e011a7c4711a4ca43f093b1c08"} Sep 30 17:19:21 crc kubenswrapper[4688]: I0930 17:19:21.629406 4688 generic.go:334] "Generic (PLEG): container finished" podID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerID="5907a3114b2c689a10e78e85ef6852add51558e011a7c4711a4ca43f093b1c08" exitCode=0 Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.014966 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.064149 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpcbl\" (UniqueName: \"kubernetes.io/projected/2dac7937-97ad-4f11-bfb8-c0bd8d558182-kube-api-access-cpcbl\") pod \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") "
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.064630 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-utilities\") pod \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") "
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.064657 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-catalog-content\") pod \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\" (UID: \"2dac7937-97ad-4f11-bfb8-c0bd8d558182\") "
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.072424 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dac7937-97ad-4f11-bfb8-c0bd8d558182-kube-api-access-cpcbl" (OuterVolumeSpecName: "kube-api-access-cpcbl") pod "2dac7937-97ad-4f11-bfb8-c0bd8d558182" (UID: "2dac7937-97ad-4f11-bfb8-c0bd8d558182"). InnerVolumeSpecName "kube-api-access-cpcbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.078688 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dac7937-97ad-4f11-bfb8-c0bd8d558182" (UID: "2dac7937-97ad-4f11-bfb8-c0bd8d558182"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.079665 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-utilities" (OuterVolumeSpecName: "utilities") pod "2dac7937-97ad-4f11-bfb8-c0bd8d558182" (UID: "2dac7937-97ad-4f11-bfb8-c0bd8d558182"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.165847 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpcbl\" (UniqueName: \"kubernetes.io/projected/2dac7937-97ad-4f11-bfb8-c0bd8d558182-kube-api-access-cpcbl\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.165890 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.165900 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dac7937-97ad-4f11-bfb8-c0bd8d558182-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.545387 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.545472 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.545532 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4"
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.546209 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.546271 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568" gracePeriod=600
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.636617 4688 generic.go:334] "Generic (PLEG): container finished" podID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerID="6a8d25a3badf24ac979d71ef431f56065bef14aa430161013f0b2aa2b6812e92" exitCode=0
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.636694 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkpw" event={"ID":"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e","Type":"ContainerDied","Data":"6a8d25a3badf24ac979d71ef431f56065bef14aa430161013f0b2aa2b6812e92"}
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.638415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2qpx4" event={"ID":"2dac7937-97ad-4f11-bfb8-c0bd8d558182","Type":"ContainerDied","Data":"d31ba4fd8863909b23cab9e2498d17fa47234f6790bbb04451b085d60e4a5c17"}
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.638464 4688 scope.go:117] "RemoveContainer" containerID="5907a3114b2c689a10e78e85ef6852add51558e011a7c4711a4ca43f093b1c08"
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.638625 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2qpx4"
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.681264 4688 scope.go:117] "RemoveContainer" containerID="bbf731940390f13cf2ab11d5a84ba46479980019e38b2e4ca22ccb8bb80bfb38"
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.707602 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpx4"]
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.710774 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2qpx4"]
Sep 30 17:19:22 crc kubenswrapper[4688]: I0930 17:19:22.715032 4688 scope.go:117] "RemoveContainer" containerID="e77e83f4c5af807a71bbe8e82a7f9fa5835409e51e91e9ab234b5ef0cc60bdc9"
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.116930 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grkpw"
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.179534 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-catalog-content\") pod \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") "
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.280587 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-utilities\") pod \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") "
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.281240 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pt4g\" (UniqueName: \"kubernetes.io/projected/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-kube-api-access-5pt4g\") pod \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\" (UID: \"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e\") "
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.281353 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-utilities" (OuterVolumeSpecName: "utilities") pod "a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" (UID: "a0a597c2-90ff-4eb6-93d5-cafd73bdd02e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.281547 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.286207 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-kube-api-access-5pt4g" (OuterVolumeSpecName: "kube-api-access-5pt4g") pod "a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" (UID: "a0a597c2-90ff-4eb6-93d5-cafd73bdd02e"). InnerVolumeSpecName "kube-api-access-5pt4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.382111 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pt4g\" (UniqueName: \"kubernetes.io/projected/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-kube-api-access-5pt4g\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.437663 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" path="/var/lib/kubelet/pods/2dac7937-97ad-4f11-bfb8-c0bd8d558182/volumes"
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.633689 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7znjl"]
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.659816 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkpw" event={"ID":"a0a597c2-90ff-4eb6-93d5-cafd73bdd02e","Type":"ContainerDied","Data":"529b0fefe3349e11a49addb2c08be8054b74a49f92613e176f8151c5ae8f6f3d"}
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.659906 4688 scope.go:117] "RemoveContainer" containerID="6a8d25a3badf24ac979d71ef431f56065bef14aa430161013f0b2aa2b6812e92"
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.660068 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grkpw"
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.668413 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568" exitCode=0
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.668471 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568"}
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.680676 4688 scope.go:117] "RemoveContainer" containerID="4dce637e9726aac363bf61e25d77ad219435fea50d1e185f7f8ee4aeaf683991"
Sep 30 17:19:23 crc kubenswrapper[4688]: I0930 17:19:23.700544 4688 scope.go:117] "RemoveContainer" containerID="aee43419a81dcd42830fc4fe1f164d92c6fedad6c902605602a3701691644823"
Sep 30 17:19:24 crc kubenswrapper[4688]: I0930 17:19:24.017643 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" (UID: "a0a597c2-90ff-4eb6-93d5-cafd73bdd02e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:19:24 crc kubenswrapper[4688]: I0930 17:19:24.091276 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:24 crc kubenswrapper[4688]: I0930 17:19:24.296176 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grkpw"]
Sep 30 17:19:24 crc kubenswrapper[4688]: I0930 17:19:24.299114 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-grkpw"]
Sep 30 17:19:24 crc kubenswrapper[4688]: I0930 17:19:24.341130 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q72fn"
Sep 30 17:19:24 crc kubenswrapper[4688]: I0930 17:19:24.677628 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"b04a12f996787b13e1d57cd8e507854626f736e1ad7221732b79722b6885a9e0"}
Sep 30 17:19:25 crc kubenswrapper[4688]: I0930 17:19:25.449411 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" path="/var/lib/kubelet/pods/a0a597c2-90ff-4eb6-93d5-cafd73bdd02e/volumes"
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.436468 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q72fn"]
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.437102 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q72fn" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerName="registry-server" containerID="cri-o://40bc259ed3260fd48f6e74e2c6730f4924a18d654fc352a2a02ab11843a186f4" gracePeriod=2
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.699398 4688 generic.go:334] "Generic (PLEG): container finished" podID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerID="40bc259ed3260fd48f6e74e2c6730f4924a18d654fc352a2a02ab11843a186f4" exitCode=0
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.699468 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q72fn" event={"ID":"43f69a53-da75-418d-a8c6-e0557d204bd3","Type":"ContainerDied","Data":"40bc259ed3260fd48f6e74e2c6730f4924a18d654fc352a2a02ab11843a186f4"}
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.796945 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q72fn"
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.944276 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-utilities\") pod \"43f69a53-da75-418d-a8c6-e0557d204bd3\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") "
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.945236 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-utilities" (OuterVolumeSpecName: "utilities") pod "43f69a53-da75-418d-a8c6-e0557d204bd3" (UID: "43f69a53-da75-418d-a8c6-e0557d204bd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.945440 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-catalog-content\") pod \"43f69a53-da75-418d-a8c6-e0557d204bd3\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") "
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.950765 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmmcl\" (UniqueName: \"kubernetes.io/projected/43f69a53-da75-418d-a8c6-e0557d204bd3-kube-api-access-bmmcl\") pod \"43f69a53-da75-418d-a8c6-e0557d204bd3\" (UID: \"43f69a53-da75-418d-a8c6-e0557d204bd3\") "
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.951214 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.958091 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f69a53-da75-418d-a8c6-e0557d204bd3-kube-api-access-bmmcl" (OuterVolumeSpecName: "kube-api-access-bmmcl") pod "43f69a53-da75-418d-a8c6-e0557d204bd3" (UID: "43f69a53-da75-418d-a8c6-e0557d204bd3"). InnerVolumeSpecName "kube-api-access-bmmcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:19:27 crc kubenswrapper[4688]: I0930 17:19:27.992674 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43f69a53-da75-418d-a8c6-e0557d204bd3" (UID: "43f69a53-da75-418d-a8c6-e0557d204bd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:19:28 crc kubenswrapper[4688]: I0930 17:19:28.052157 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f69a53-da75-418d-a8c6-e0557d204bd3-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:28 crc kubenswrapper[4688]: I0930 17:19:28.052932 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmmcl\" (UniqueName: \"kubernetes.io/projected/43f69a53-da75-418d-a8c6-e0557d204bd3-kube-api-access-bmmcl\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:28 crc kubenswrapper[4688]: I0930 17:19:28.705978 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q72fn" event={"ID":"43f69a53-da75-418d-a8c6-e0557d204bd3","Type":"ContainerDied","Data":"ebaa390e4f77664199cd23342933cb17757eb474438729ab0db1027e28fa4a73"}
Sep 30 17:19:28 crc kubenswrapper[4688]: I0930 17:19:28.706050 4688 scope.go:117] "RemoveContainer" containerID="40bc259ed3260fd48f6e74e2c6730f4924a18d654fc352a2a02ab11843a186f4"
Sep 30 17:19:28 crc kubenswrapper[4688]: I0930 17:19:28.706214 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q72fn"
Sep 30 17:19:28 crc kubenswrapper[4688]: I0930 17:19:28.732674 4688 scope.go:117] "RemoveContainer" containerID="e3757db0b3b86b03896e8b2761842626bd487446116a83c7c918612335d01c45"
Sep 30 17:19:28 crc kubenswrapper[4688]: I0930 17:19:28.741350 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q72fn"]
Sep 30 17:19:28 crc kubenswrapper[4688]: I0930 17:19:28.747849 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q72fn"]
Sep 30 17:19:28 crc kubenswrapper[4688]: I0930 17:19:28.769906 4688 scope.go:117] "RemoveContainer" containerID="e1565c7bc4f1ccace256dcce2d6f2a07d40b4c8e9f918102b03266b1c801995e"
Sep 30 17:19:29 crc kubenswrapper[4688]: I0930 17:19:29.439373 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" path="/var/lib/kubelet/pods/43f69a53-da75-418d-a8c6-e0557d204bd3/volumes"
Sep 30 17:19:48 crc kubenswrapper[4688]: I0930 17:19:48.667531 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" podUID="7254866f-2108-4fc6-827a-5e177b48259d" containerName="oauth-openshift" containerID="cri-o://fb1129a61d6eb266910d39dad03881ea549772222aa139e34fb234fa8baefe57" gracePeriod=15
Sep 30 17:19:48 crc kubenswrapper[4688]: I0930 17:19:48.823240 4688 generic.go:334] "Generic (PLEG): container finished" podID="7254866f-2108-4fc6-827a-5e177b48259d" containerID="fb1129a61d6eb266910d39dad03881ea549772222aa139e34fb234fa8baefe57" exitCode=0
Sep 30 17:19:48 crc kubenswrapper[4688]: I0930 17:19:48.823299 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" event={"ID":"7254866f-2108-4fc6-827a-5e177b48259d","Type":"ContainerDied","Data":"fb1129a61d6eb266910d39dad03881ea549772222aa139e34fb234fa8baefe57"}
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.131705 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.170003 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"]
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.170673 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.170808 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.170919 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c648e6-0ccd-4b20-a724-1608c36fdfaa" containerName="pruner"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.170982 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c648e6-0ccd-4b20-a724-1608c36fdfaa" containerName="pruner"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.171062 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerName="extract-content"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.171137 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerName="extract-content"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.171212 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81652315-c167-4821-a265-3b415a0611e3" containerName="pruner"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.171283 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="81652315-c167-4821-a265-3b415a0611e3" containerName="pruner"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.171361 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerName="extract-utilities"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.171434 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerName="extract-utilities"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.171509 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.171588 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.171651 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerName="extract-utilities"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.171722 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerName="extract-utilities"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.171807 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.171873 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.171929 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5879067e-2c76-404f-a70f-85af13c713b9" containerName="extract-utilities"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.171991 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5879067e-2c76-404f-a70f-85af13c713b9" containerName="extract-utilities"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.172046 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5879067e-2c76-404f-a70f-85af13c713b9" containerName="extract-content"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172097 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5879067e-2c76-404f-a70f-85af13c713b9" containerName="extract-content"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.172159 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerName="extract-utilities"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172223 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerName="extract-utilities"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.172285 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5879067e-2c76-404f-a70f-85af13c713b9" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172342 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5879067e-2c76-404f-a70f-85af13c713b9" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.172397 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7254866f-2108-4fc6-827a-5e177b48259d" containerName="oauth-openshift"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172453 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7254866f-2108-4fc6-827a-5e177b48259d" containerName="oauth-openshift"
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.172517 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerName="extract-content"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172613 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-login\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172686 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-audit-policies\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172629 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerName="extract-content"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172771 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-service-ca\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: E0930 17:19:49.172777 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerName="extract-content"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172794 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerName="extract-content"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172818 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-router-certs\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172856 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-ocp-branding-template\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172907 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-serving-cert\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172960 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7254866f-2108-4fc6-827a-5e177b48259d-audit-dir\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172989 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-cliconfig\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.173074 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-idp-0-file-data\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.172962 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="81652315-c167-4821-a265-3b415a0611e3" containerName="pruner"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174189 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7254866f-2108-4fc6-827a-5e177b48259d" containerName="oauth-openshift"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174245 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c648e6-0ccd-4b20-a724-1608c36fdfaa" containerName="pruner"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174267 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f69a53-da75-418d-a8c6-e0557d204bd3" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174276 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5879067e-2c76-404f-a70f-85af13c713b9" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174285 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a597c2-90ff-4eb6-93d5-cafd73bdd02e" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.173099 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7254866f-2108-4fc6-827a-5e177b48259d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174359 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-session\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174445 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-provider-selection\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174549 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-trusted-ca-bundle\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174074 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174172 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174885 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.174326 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dac7937-97ad-4f11-bfb8-c0bd8d558182" containerName="registry-server"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.175123 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn9rl\" (UniqueName: \"kubernetes.io/projected/7254866f-2108-4fc6-827a-5e177b48259d-kube-api-access-bn9rl\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.175154 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-error\") pod \"7254866f-2108-4fc6-827a-5e177b48259d\" (UID: \"7254866f-2108-4fc6-827a-5e177b48259d\") "
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.175613 4688 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-audit-policies\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.175663 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.175675 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.175686 4688 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7254866f-2108-4fc6-827a-5e177b48259d-audit-dir\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.175974 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.178705 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.182285 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"]
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.189682 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.191168 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7254866f-2108-4fc6-827a-5e177b48259d-kube-api-access-bn9rl" (OuterVolumeSpecName: "kube-api-access-bn9rl") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "kube-api-access-bn9rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.191450 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.191989 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.192472 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.192627 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.192906 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.208736 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.209606 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7254866f-2108-4fc6-827a-5e177b48259d" (UID: "7254866f-2108-4fc6-827a-5e177b48259d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.276821 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2sbc\" (UniqueName: \"kubernetes.io/projected/511ecb8d-ebb9-4a65-b22c-9d95b757d831-kube-api-access-p2sbc\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.276885 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.276913 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.276944 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/511ecb8d-ebb9-4a65-b22c-9d95b757d831-audit-dir\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.276974 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277003 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-template-login\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277020 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-audit-policies\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277041 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-session\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277097 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277137 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277154 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-template-error\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277176 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277202 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277332 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn9rl\" (UniqueName: \"kubernetes.io/projected/7254866f-2108-4fc6-827a-5e177b48259d-kube-api-access-bn9rl\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277369 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277383 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277393 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277405 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277414 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277425 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277437 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277447 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.277459 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7254866f-2108-4fc6-827a-5e177b48259d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.378666 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-template-login\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379106 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-audit-policies\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379138 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379176 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-session\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379202 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379663 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379699 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-template-error\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379734 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379773 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379840 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2sbc\" (UniqueName: \"kubernetes.io/projected/511ecb8d-ebb9-4a65-b22c-9d95b757d831-kube-api-access-p2sbc\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379871 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379900 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379929 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/511ecb8d-ebb9-4a65-b22c-9d95b757d831-audit-dir\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.379958 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.380256 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/511ecb8d-ebb9-4a65-b22c-9d95b757d831-audit-dir\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.381627 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.381819 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-audit-policies\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.382124 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.382762 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.383829 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-template-error\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.385095 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-session\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.385274 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.385328 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-template-login\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.385766 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.386014 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.386048 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.386804 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/511ecb8d-ebb9-4a65-b22c-9d95b757d831-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.396885 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2sbc\" (UniqueName: \"kubernetes.io/projected/511ecb8d-ebb9-4a65-b22c-9d95b757d831-kube-api-access-p2sbc\") pod \"oauth-openshift-7f8fc677f9-zpj2j\" (UID: \"511ecb8d-ebb9-4a65-b22c-9d95b757d831\") " pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.549292 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.830216 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl" event={"ID":"7254866f-2108-4fc6-827a-5e177b48259d","Type":"ContainerDied","Data":"bc4e440fea841952de6f5e7908cdf9efea6eb505f06c7798c6795ccfa6a9a290"}
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.830295 4688 scope.go:117] "RemoveContainer" containerID="fb1129a61d6eb266910d39dad03881ea549772222aa139e34fb234fa8baefe57"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.830346 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7znjl"
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.868167 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7znjl"]
Sep 30 17:19:49 crc kubenswrapper[4688]: I0930 17:19:49.868883 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7znjl"]
Sep 30 17:19:50 crc kubenswrapper[4688]: I0930 17:19:50.036684 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"]
Sep 30 17:19:50 crc kubenswrapper[4688]: I0930 17:19:50.839757 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j" event={"ID":"511ecb8d-ebb9-4a65-b22c-9d95b757d831","Type":"ContainerStarted","Data":"fbd416242bb45bfc01c83dbd440b8b714d1f0feeb9704abc45ca0b6be135411c"}
Sep 30 17:19:50 crc kubenswrapper[4688]: I0930 17:19:50.840217 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:50 crc kubenswrapper[4688]: I0930 17:19:50.840230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j" event={"ID":"511ecb8d-ebb9-4a65-b22c-9d95b757d831","Type":"ContainerStarted","Data":"fc3661258b385be9f5aa6023f1ddf12961505cdcc9cc1a37c865744c0b728055"}
Sep 30 17:19:50 crc kubenswrapper[4688]: I0930 17:19:50.846967 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j"
Sep 30 17:19:50 crc kubenswrapper[4688]: I0930 17:19:50.876365 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f8fc677f9-zpj2j" podStartSLOduration=27.876338637 podStartE2EDuration="27.876338637s" podCreationTimestamp="2025-09-30 17:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:19:50.875409613 +0000 UTC m=+248.170847171" watchObservedRunningTime="2025-09-30 17:19:50.876338637 +0000 UTC m=+248.171776195"
Sep 30 17:19:51 crc kubenswrapper[4688]: I0930 17:19:51.443816 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7254866f-2108-4fc6-827a-5e177b48259d" path="/var/lib/kubelet/pods/7254866f-2108-4fc6-827a-5e177b48259d/volumes"
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.562996 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntjl4"]
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.564075 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ntjl4" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerName="registry-server" containerID="cri-o://85ab9e7789aa905ea8391b9c816ad937fa812fb8a9ee67a44b1005e761e4bd02" gracePeriod=30
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.573938 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fh8m7"]
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.574249 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fh8m7" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" containerName="registry-server" containerID="cri-o://b361802751ffc4faed583c6cee71382f3fd5eb2c8d428e74ab33e27909f7f55c" gracePeriod=30
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.587374 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-smnqd"]
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.587741 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" podUID="a171a3b9-c317-4aa7-84c8-f201dbaa653d" containerName="marketplace-operator" containerID="cri-o://d2da5fb020cdf094a438204b12d743f9c0766657ff6f7359abb619f324a5fdbe" gracePeriod=30
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.594530 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w94j6"]
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.594859 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w94j6" podUID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerName="registry-server" containerID="cri-o://579d598e333c06fac34e2d11259bb27b76fdc312a3f0ff980b2c357ceac5642d" gracePeriod=30
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.605118 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28lnl"]
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.605477 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-28lnl" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerName="registry-server" containerID="cri-o://780bd376c7f6739ee92d44051aa2005d182dcf75b869158ff5b96a686cec9bb9" gracePeriod=30
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.614932 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ddkjs"]
Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.615710 4688 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.631169 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ddkjs"] Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.716357 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4669830-b238-42f8-8824-9aced2d9ca0a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ddkjs\" (UID: \"f4669830-b238-42f8-8824-9aced2d9ca0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.716470 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4669830-b238-42f8-8824-9aced2d9ca0a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ddkjs\" (UID: \"f4669830-b238-42f8-8824-9aced2d9ca0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.716515 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwk4g\" (UniqueName: \"kubernetes.io/projected/f4669830-b238-42f8-8824-9aced2d9ca0a-kube-api-access-pwk4g\") pod \"marketplace-operator-79b997595-ddkjs\" (UID: \"f4669830-b238-42f8-8824-9aced2d9ca0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.817109 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4669830-b238-42f8-8824-9aced2d9ca0a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ddkjs\" (UID: \"f4669830-b238-42f8-8824-9aced2d9ca0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.817210 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4669830-b238-42f8-8824-9aced2d9ca0a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ddkjs\" (UID: \"f4669830-b238-42f8-8824-9aced2d9ca0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.817243 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwk4g\" (UniqueName: \"kubernetes.io/projected/f4669830-b238-42f8-8824-9aced2d9ca0a-kube-api-access-pwk4g\") pod \"marketplace-operator-79b997595-ddkjs\" (UID: \"f4669830-b238-42f8-8824-9aced2d9ca0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.818678 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4669830-b238-42f8-8824-9aced2d9ca0a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ddkjs\" (UID: \"f4669830-b238-42f8-8824-9aced2d9ca0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.825742 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f4669830-b238-42f8-8824-9aced2d9ca0a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ddkjs\" (UID: \"f4669830-b238-42f8-8824-9aced2d9ca0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.835899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwk4g\" (UniqueName: \"kubernetes.io/projected/f4669830-b238-42f8-8824-9aced2d9ca0a-kube-api-access-pwk4g\") pod \"marketplace-operator-79b997595-ddkjs\" (UID: \"f4669830-b238-42f8-8824-9aced2d9ca0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.938706 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.982753 4688 generic.go:334] "Generic (PLEG): container finished" podID="a171a3b9-c317-4aa7-84c8-f201dbaa653d" containerID="d2da5fb020cdf094a438204b12d743f9c0766657ff6f7359abb619f324a5fdbe" exitCode=0 Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.982831 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" event={"ID":"a171a3b9-c317-4aa7-84c8-f201dbaa653d","Type":"ContainerDied","Data":"d2da5fb020cdf094a438204b12d743f9c0766657ff6f7359abb619f324a5fdbe"} Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.987203 4688 generic.go:334] "Generic (PLEG): container finished" podID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerID="579d598e333c06fac34e2d11259bb27b76fdc312a3f0ff980b2c357ceac5642d" exitCode=0 Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.987342 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w94j6" event={"ID":"d20b8171-f309-4ae0-bde8-55dfd73aa45f","Type":"ContainerDied","Data":"579d598e333c06fac34e2d11259bb27b76fdc312a3f0ff980b2c357ceac5642d"} Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.990641 4688 generic.go:334] "Generic (PLEG): container finished" podID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerID="85ab9e7789aa905ea8391b9c816ad937fa812fb8a9ee67a44b1005e761e4bd02" exitCode=0 Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.990708 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntjl4" event={"ID":"f56e08ff-acbe-425f-9570-8ce3a06208b5","Type":"ContainerDied","Data":"85ab9e7789aa905ea8391b9c816ad937fa812fb8a9ee67a44b1005e761e4bd02"} Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.990773 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntjl4" event={"ID":"f56e08ff-acbe-425f-9570-8ce3a06208b5","Type":"ContainerDied","Data":"b66741d23a082c2ae7c826ec890bc22ab9eda0e1846a53a196bb2984dca0d2d2"} Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.990792 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b66741d23a082c2ae7c826ec890bc22ab9eda0e1846a53a196bb2984dca0d2d2" Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.994900 4688 generic.go:334] "Generic (PLEG): container finished" podID="a8846911-0e30-4c42-9703-08f18f377ee4" containerID="b361802751ffc4faed583c6cee71382f3fd5eb2c8d428e74ab33e27909f7f55c" exitCode=0 Sep 30 17:20:10 crc kubenswrapper[4688]: I0930 17:20:10.994987 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-fh8m7" event={"ID":"a8846911-0e30-4c42-9703-08f18f377ee4","Type":"ContainerDied","Data":"b361802751ffc4faed583c6cee71382f3fd5eb2c8d428e74ab33e27909f7f55c"} Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.018321 4688 generic.go:334] "Generic (PLEG): container finished" podID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerID="780bd376c7f6739ee92d44051aa2005d182dcf75b869158ff5b96a686cec9bb9" exitCode=0 Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.018391 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28lnl" event={"ID":"bb75d997-dcba-4913-a3de-e5eeaa27ecbc","Type":"ContainerDied","Data":"780bd376c7f6739ee92d44051aa2005d182dcf75b869158ff5b96a686cec9bb9"} Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.023511 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.027146 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/f56e08ff-acbe-425f-9570-8ce3a06208b5-kube-api-access-lbdjd\") pod \"f56e08ff-acbe-425f-9570-8ce3a06208b5\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.027222 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-catalog-content\") pod \"f56e08ff-acbe-425f-9570-8ce3a06208b5\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.027242 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-utilities\") pod \"f56e08ff-acbe-425f-9570-8ce3a06208b5\" (UID: \"f56e08ff-acbe-425f-9570-8ce3a06208b5\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.034220 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56e08ff-acbe-425f-9570-8ce3a06208b5-kube-api-access-lbdjd" (OuterVolumeSpecName: "kube-api-access-lbdjd") pod "f56e08ff-acbe-425f-9570-8ce3a06208b5" (UID: "f56e08ff-acbe-425f-9570-8ce3a06208b5"). InnerVolumeSpecName "kube-api-access-lbdjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.034255 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-utilities" (OuterVolumeSpecName: "utilities") pod "f56e08ff-acbe-425f-9570-8ce3a06208b5" (UID: "f56e08ff-acbe-425f-9570-8ce3a06208b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.109343 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f56e08ff-acbe-425f-9570-8ce3a06208b5" (UID: "f56e08ff-acbe-425f-9570-8ce3a06208b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.128598 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/f56e08ff-acbe-425f-9570-8ce3a06208b5-kube-api-access-lbdjd\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.128637 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.128647 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f56e08ff-acbe-425f-9570-8ce3a06208b5-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.142885 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w94j6" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.153662 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.157393 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.193983 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.333842 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-catalog-content\") pod \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.333934 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjjh4\" (UniqueName: \"kubernetes.io/projected/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-kube-api-access-tjjh4\") pod \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.333970 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcpsq\" (UniqueName: \"kubernetes.io/projected/a8846911-0e30-4c42-9703-08f18f377ee4-kube-api-access-tcpsq\") pod \"a8846911-0e30-4c42-9703-08f18f377ee4\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.334011 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-catalog-content\") pod \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.334063 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7n7x\" (UniqueName: \"kubernetes.io/projected/a171a3b9-c317-4aa7-84c8-f201dbaa653d-kube-api-access-b7n7x\") pod \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.334096 4688 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-utilities\") pod \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.334128 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-operator-metrics\") pod \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.335152 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-utilities" (OuterVolumeSpecName: "utilities") pod "d20b8171-f309-4ae0-bde8-55dfd73aa45f" (UID: "d20b8171-f309-4ae0-bde8-55dfd73aa45f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.335211 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j28lt\" (UniqueName: \"kubernetes.io/projected/d20b8171-f309-4ae0-bde8-55dfd73aa45f-kube-api-access-j28lt\") pod \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\" (UID: \"d20b8171-f309-4ae0-bde8-55dfd73aa45f\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.335270 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-catalog-content\") pod \"a8846911-0e30-4c42-9703-08f18f377ee4\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.335306 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-trusted-ca\") pod \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\" (UID: \"a171a3b9-c317-4aa7-84c8-f201dbaa653d\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.335332 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-utilities\") pod \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\" (UID: \"bb75d997-dcba-4913-a3de-e5eeaa27ecbc\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.335375 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-utilities\") pod \"a8846911-0e30-4c42-9703-08f18f377ee4\" (UID: \"a8846911-0e30-4c42-9703-08f18f377ee4\") " Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.336533 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-utilities" (OuterVolumeSpecName: "utilities") pod "a8846911-0e30-4c42-9703-08f18f377ee4" (UID: "a8846911-0e30-4c42-9703-08f18f377ee4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.336568 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-utilities" (OuterVolumeSpecName: "utilities") pod "bb75d997-dcba-4913-a3de-e5eeaa27ecbc" (UID: "bb75d997-dcba-4913-a3de-e5eeaa27ecbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.336788 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.336805 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.336817 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.339543 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20b8171-f309-4ae0-bde8-55dfd73aa45f-kube-api-access-j28lt" (OuterVolumeSpecName: "kube-api-access-j28lt") pod "d20b8171-f309-4ae0-bde8-55dfd73aa45f" (UID: "d20b8171-f309-4ae0-bde8-55dfd73aa45f"). InnerVolumeSpecName "kube-api-access-j28lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.343129 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a171a3b9-c317-4aa7-84c8-f201dbaa653d" (UID: "a171a3b9-c317-4aa7-84c8-f201dbaa653d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.344770 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-kube-api-access-tjjh4" (OuterVolumeSpecName: "kube-api-access-tjjh4") pod "bb75d997-dcba-4913-a3de-e5eeaa27ecbc" (UID: "bb75d997-dcba-4913-a3de-e5eeaa27ecbc"). InnerVolumeSpecName "kube-api-access-tjjh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.346935 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8846911-0e30-4c42-9703-08f18f377ee4-kube-api-access-tcpsq" (OuterVolumeSpecName: "kube-api-access-tcpsq") pod "a8846911-0e30-4c42-9703-08f18f377ee4" (UID: "a8846911-0e30-4c42-9703-08f18f377ee4"). InnerVolumeSpecName "kube-api-access-tcpsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.347172 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a171a3b9-c317-4aa7-84c8-f201dbaa653d" (UID: "a171a3b9-c317-4aa7-84c8-f201dbaa653d"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.349811 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a171a3b9-c317-4aa7-84c8-f201dbaa653d-kube-api-access-b7n7x" (OuterVolumeSpecName: "kube-api-access-b7n7x") pod "a171a3b9-c317-4aa7-84c8-f201dbaa653d" (UID: "a171a3b9-c317-4aa7-84c8-f201dbaa653d"). InnerVolumeSpecName "kube-api-access-b7n7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.352068 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d20b8171-f309-4ae0-bde8-55dfd73aa45f" (UID: "d20b8171-f309-4ae0-bde8-55dfd73aa45f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.390444 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8846911-0e30-4c42-9703-08f18f377ee4" (UID: "a8846911-0e30-4c42-9703-08f18f377ee4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.411884 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb75d997-dcba-4913-a3de-e5eeaa27ecbc" (UID: "bb75d997-dcba-4913-a3de-e5eeaa27ecbc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.437490 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20b8171-f309-4ae0-bde8-55dfd73aa45f-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.437534 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjjh4\" (UniqueName: \"kubernetes.io/projected/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-kube-api-access-tjjh4\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.437552 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcpsq\" (UniqueName: \"kubernetes.io/projected/a8846911-0e30-4c42-9703-08f18f377ee4-kube-api-access-tcpsq\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.437566 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb75d997-dcba-4913-a3de-e5eeaa27ecbc-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.440726 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7n7x\" (UniqueName: \"kubernetes.io/projected/a171a3b9-c317-4aa7-84c8-f201dbaa653d-kube-api-access-b7n7x\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.440794 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.440816 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j28lt\" (UniqueName: \"kubernetes.io/projected/d20b8171-f309-4ae0-bde8-55dfd73aa45f-kube-api-access-j28lt\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.440838 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8846911-0e30-4c42-9703-08f18f377ee4-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.440852 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a171a3b9-c317-4aa7-84c8-f201dbaa653d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:20:11 crc kubenswrapper[4688]: I0930 17:20:11.461354 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ddkjs"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.032848 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28lnl" event={"ID":"bb75d997-dcba-4913-a3de-e5eeaa27ecbc","Type":"ContainerDied","Data":"ee2c8aafbd66bc4e6e9c0813d84ca92067518343d7dccdbc0ebe1ed953d0e2b0"} Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.032894 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-28lnl" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.032959 4688 scope.go:117] "RemoveContainer" containerID="780bd376c7f6739ee92d44051aa2005d182dcf75b869158ff5b96a686cec9bb9" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.034385 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" event={"ID":"f4669830-b238-42f8-8824-9aced2d9ca0a","Type":"ContainerStarted","Data":"21bb14874f4b66c1507755bc6634c6b5bff3fbfca4835fb1190335985fe550ac"} Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.034435 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" event={"ID":"f4669830-b238-42f8-8824-9aced2d9ca0a","Type":"ContainerStarted","Data":"694f0eb08907e876bfb98c0fba85b3defa1a6f3ff22df407f7a6b04a3a03f61f"} Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.034945 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.037266 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.037309 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-smnqd" event={"ID":"a171a3b9-c317-4aa7-84c8-f201dbaa653d","Type":"ContainerDied","Data":"56f731009baccebcabc8e4870c7a8475a1b0cad95efe174a530e0f2a700bc0f5"} Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.038488 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.041336 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w94j6" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.041356 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w94j6" event={"ID":"d20b8171-f309-4ae0-bde8-55dfd73aa45f","Type":"ContainerDied","Data":"8e506d31fb9590e5e7d0b4badfb2ee3148941f3185cc46daf2bc5b9ed65127b3"} Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.044565 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fh8m7" event={"ID":"a8846911-0e30-4c42-9703-08f18f377ee4","Type":"ContainerDied","Data":"04213f1e3e4cf6e8079119a60efbf5ba210a2e7e469339429a440261d7f04b81"} Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.044612 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fh8m7" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.044641 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ntjl4" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.048211 4688 scope.go:117] "RemoveContainer" containerID="8eecc5eb101c8821f30c22074ccf6650b9e61e83dd627205c50e58e3c9680dc5" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.069759 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ddkjs" podStartSLOduration=2.069725557 podStartE2EDuration="2.069725557s" podCreationTimestamp="2025-09-30 17:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:20:12.059096865 +0000 UTC m=+269.354534433" watchObservedRunningTime="2025-09-30 17:20:12.069725557 +0000 UTC m=+269.365163125" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.073905 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w94j6"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.082055 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w94j6"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.084482 4688 scope.go:117] "RemoveContainer" containerID="04f5269a0a69bacbd3d77adf6fbd98462b4cae7650fa75598b999fec771943c6" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.084547 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-smnqd"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.087515 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-smnqd"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.095641 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntjl4"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.097841 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ntjl4"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.102435 4688 scope.go:117] "RemoveContainer" containerID="d2da5fb020cdf094a438204b12d743f9c0766657ff6f7359abb619f324a5fdbe" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.121630 4688 scope.go:117] "RemoveContainer" containerID="579d598e333c06fac34e2d11259bb27b76fdc312a3f0ff980b2c357ceac5642d" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.130245 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fh8m7"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.135086 4688 scope.go:117] "RemoveContainer" containerID="48776ebf1cfa042613d0e2bfe2818a05018a94da495145766959ead9d0269f57" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.138915 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fh8m7"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.145191 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28lnl"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.163788 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-28lnl"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.166002 4688 scope.go:117] "RemoveContainer" containerID="f48aa37de1f30a6a662033f859ccf1460f8590431bc0496bcdf8229f7932a32e" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.191154 4688 
scope.go:117] "RemoveContainer" containerID="b361802751ffc4faed583c6cee71382f3fd5eb2c8d428e74ab33e27909f7f55c" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.207961 4688 scope.go:117] "RemoveContainer" containerID="7d0dde013eec0dfd36850a9b5aef4057bdbe292c6aa5a7da7e25edd58c9a68c7" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.230544 4688 scope.go:117] "RemoveContainer" containerID="7e5c05f555fe0de5b429941e618e02bf39d6bda85472b89b4b551e8fc33d1d9a" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786283 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n8lw2"] Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786528 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" containerName="extract-utilities" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786542 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" containerName="extract-utilities" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786553 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786559 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786569 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerName="extract-content" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786588 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerName="extract-content" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786597 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786604 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786612 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerName="extract-utilities" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786618 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerName="extract-utilities" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786624 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerName="extract-content" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786630 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerName="extract-content" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786638 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerName="extract-utilities" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786645 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerName="extract-utilities" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786653 4688 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" containerName="extract-content" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786659 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" containerName="extract-content" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786669 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerName="extract-utilities" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786676 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerName="extract-utilities" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786690 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786695 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786702 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a171a3b9-c317-4aa7-84c8-f201dbaa653d" containerName="marketplace-operator" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786708 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a171a3b9-c317-4aa7-84c8-f201dbaa653d" containerName="marketplace-operator" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786720 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerName="extract-content" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786726 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerName="extract-content" Sep 30 17:20:12 crc kubenswrapper[4688]: E0930 17:20:12.786734 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786740 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786848 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786860 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a171a3b9-c317-4aa7-84c8-f201dbaa653d" containerName="marketplace-operator" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786868 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786875 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.786886 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" containerName="registry-server" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.787605 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.794408 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.801441 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n8lw2"] Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.963340 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsgh7\" (UniqueName: \"kubernetes.io/projected/1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5-kube-api-access-wsgh7\") pod \"redhat-marketplace-n8lw2\" (UID: \"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5\") " pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.963445 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5-utilities\") pod \"redhat-marketplace-n8lw2\" (UID: \"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5\") " pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.963530 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5-catalog-content\") pod \"redhat-marketplace-n8lw2\" (UID: \"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5\") " pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:12 crc kubenswrapper[4688]: I0930 17:20:12.997479 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wjxnj"] Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.024361 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.025149 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjxnj"] Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.026806 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.064699 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsgh7\" (UniqueName: \"kubernetes.io/projected/1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5-kube-api-access-wsgh7\") pod \"redhat-marketplace-n8lw2\" (UID: \"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5\") " pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.064784 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5-utilities\") pod \"redhat-marketplace-n8lw2\" (UID: \"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5\") " pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.064838 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5-catalog-content\") pod \"redhat-marketplace-n8lw2\" (UID: \"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5\") " pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.065396 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5-catalog-content\") pod \"redhat-marketplace-n8lw2\" (UID: \"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5\") " pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.068297 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5-utilities\") pod \"redhat-marketplace-n8lw2\" (UID: \"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5\") " pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.091918 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsgh7\" (UniqueName: \"kubernetes.io/projected/1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5-kube-api-access-wsgh7\") pod \"redhat-marketplace-n8lw2\" (UID: \"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5\") " pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.114357 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n8lw2" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.165775 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558a9e5b-0ede-4aca-9dbd-560ea9578720-utilities\") pod \"redhat-operators-wjxnj\" (UID: \"558a9e5b-0ede-4aca-9dbd-560ea9578720\") " pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.166795 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558a9e5b-0ede-4aca-9dbd-560ea9578720-catalog-content\") pod \"redhat-operators-wjxnj\" (UID: \"558a9e5b-0ede-4aca-9dbd-560ea9578720\") " pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.166871 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqqjj\" (UniqueName: \"kubernetes.io/projected/558a9e5b-0ede-4aca-9dbd-560ea9578720-kube-api-access-kqqjj\") pod \"redhat-operators-wjxnj\" (UID: \"558a9e5b-0ede-4aca-9dbd-560ea9578720\") " pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.269362 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558a9e5b-0ede-4aca-9dbd-560ea9578720-utilities\") pod \"redhat-operators-wjxnj\" (UID: \"558a9e5b-0ede-4aca-9dbd-560ea9578720\") " pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.269451 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558a9e5b-0ede-4aca-9dbd-560ea9578720-catalog-content\") pod \"redhat-operators-wjxnj\" (UID: \"558a9e5b-0ede-4aca-9dbd-560ea9578720\") " pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.269478 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqqjj\" (UniqueName: \"kubernetes.io/projected/558a9e5b-0ede-4aca-9dbd-560ea9578720-kube-api-access-kqqjj\") pod \"redhat-operators-wjxnj\" (UID: \"558a9e5b-0ede-4aca-9dbd-560ea9578720\") " pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.269981 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558a9e5b-0ede-4aca-9dbd-560ea9578720-utilities\") pod \"redhat-operators-wjxnj\" (UID: \"558a9e5b-0ede-4aca-9dbd-560ea9578720\") " pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.270075 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558a9e5b-0ede-4aca-9dbd-560ea9578720-catalog-content\") pod \"redhat-operators-wjxnj\" (UID: \"558a9e5b-0ede-4aca-9dbd-560ea9578720\") " pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.293905 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqqjj\" (UniqueName: \"kubernetes.io/projected/558a9e5b-0ede-4aca-9dbd-560ea9578720-kube-api-access-kqqjj\") pod \"redhat-operators-wjxnj\" (UID: 
\"558a9e5b-0ede-4aca-9dbd-560ea9578720\") " pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.345230 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjxnj" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.365513 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n8lw2"] Sep 30 17:20:13 crc kubenswrapper[4688]: W0930 17:20:13.375909 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d7d14e9_8e91_4e6e_9043_7cfdea4ebca5.slice/crio-c4f90b3caeb707172439e4eebfebfd252de1a3725c98c920640c5222542ba136 WatchSource:0}: Error finding container c4f90b3caeb707172439e4eebfebfd252de1a3725c98c920640c5222542ba136: Status 404 returned error can't find the container with id c4f90b3caeb707172439e4eebfebfd252de1a3725c98c920640c5222542ba136 Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.456206 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a171a3b9-c317-4aa7-84c8-f201dbaa653d" path="/var/lib/kubelet/pods/a171a3b9-c317-4aa7-84c8-f201dbaa653d/volumes" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.457193 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8846911-0e30-4c42-9703-08f18f377ee4" path="/var/lib/kubelet/pods/a8846911-0e30-4c42-9703-08f18f377ee4/volumes" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.457807 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb75d997-dcba-4913-a3de-e5eeaa27ecbc" path="/var/lib/kubelet/pods/bb75d997-dcba-4913-a3de-e5eeaa27ecbc/volumes" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.458862 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d20b8171-f309-4ae0-bde8-55dfd73aa45f" path="/var/lib/kubelet/pods/d20b8171-f309-4ae0-bde8-55dfd73aa45f/volumes" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.459422 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56e08ff-acbe-425f-9570-8ce3a06208b5" path="/var/lib/kubelet/pods/f56e08ff-acbe-425f-9570-8ce3a06208b5/volumes" Sep 30 17:20:13 crc kubenswrapper[4688]: I0930 17:20:13.760835 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjxnj"] Sep 30 17:20:13 crc kubenswrapper[4688]: W0930 17:20:13.770253 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod558a9e5b_0ede_4aca_9dbd_560ea9578720.slice/crio-6ea57bf058846f7bd0a7006a56d8ad5b7bf08bf9cfbda5fb75f135e413c7beb2 WatchSource:0}: Error finding container 6ea57bf058846f7bd0a7006a56d8ad5b7bf08bf9cfbda5fb75f135e413c7beb2: Status 404 returned error can't find the container with id 6ea57bf058846f7bd0a7006a56d8ad5b7bf08bf9cfbda5fb75f135e413c7beb2 Sep 30 17:20:14 crc kubenswrapper[4688]: I0930 17:20:14.065904 4688 generic.go:334] "Generic (PLEG): container finished" podID="1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5" containerID="4b58dbf6683698c19dd62a5326917d568621a7c8724b99d0412f75dd82f10c16" exitCode=0 Sep 30 17:20:14 crc kubenswrapper[4688]: I0930 17:20:14.065993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n8lw2" event={"ID":"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5","Type":"ContainerDied","Data":"4b58dbf6683698c19dd62a5326917d568621a7c8724b99d0412f75dd82f10c16"} Sep 30 17:20:14 crc 
kubenswrapper[4688]: I0930 17:20:14.066066 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n8lw2" event={"ID":"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5","Type":"ContainerStarted","Data":"c4f90b3caeb707172439e4eebfebfd252de1a3725c98c920640c5222542ba136"} Sep 30 17:20:14 crc kubenswrapper[4688]: I0930 17:20:14.067767 4688 generic.go:334] "Generic (PLEG): container finished" podID="558a9e5b-0ede-4aca-9dbd-560ea9578720" containerID="683e669d57149be18841f6d5789d2d449dd595bf929972d28846fc3f9aa7c5f3" exitCode=0 Sep 30 17:20:14 crc kubenswrapper[4688]: I0930 17:20:14.067913 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjxnj" event={"ID":"558a9e5b-0ede-4aca-9dbd-560ea9578720","Type":"ContainerDied","Data":"683e669d57149be18841f6d5789d2d449dd595bf929972d28846fc3f9aa7c5f3"} Sep 30 17:20:14 crc kubenswrapper[4688]: I0930 17:20:14.067993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjxnj" event={"ID":"558a9e5b-0ede-4aca-9dbd-560ea9578720","Type":"ContainerStarted","Data":"6ea57bf058846f7bd0a7006a56d8ad5b7bf08bf9cfbda5fb75f135e413c7beb2"} Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.077044 4688 generic.go:334] "Generic (PLEG): container finished" podID="1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5" containerID="95f8cb801794d4189a0b6fc5ab6e3bb0313444d438b1cacc5ecd0c6ab6bde4b7" exitCode=0 Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.077133 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n8lw2" event={"ID":"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5","Type":"ContainerDied","Data":"95f8cb801794d4189a0b6fc5ab6e3bb0313444d438b1cacc5ecd0c6ab6bde4b7"} Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.189706 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dfp85"] Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.191172 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.194420 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.199405 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfp85"] Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.301888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/437e2975-7afe-4f5b-9dbd-90467f7ccc49-utilities\") pod \"community-operators-dfp85\" (UID: \"437e2975-7afe-4f5b-9dbd-90467f7ccc49\") " pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.301936 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/437e2975-7afe-4f5b-9dbd-90467f7ccc49-catalog-content\") pod \"community-operators-dfp85\" (UID: \"437e2975-7afe-4f5b-9dbd-90467f7ccc49\") " pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.301967 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk2pf\" (UniqueName: \"kubernetes.io/projected/437e2975-7afe-4f5b-9dbd-90467f7ccc49-kube-api-access-fk2pf\") pod \"community-operators-dfp85\" (UID: \"437e2975-7afe-4f5b-9dbd-90467f7ccc49\") " pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.385039 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2ndvz"] Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.386276 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.392792 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.403203 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/437e2975-7afe-4f5b-9dbd-90467f7ccc49-utilities\") pod \"community-operators-dfp85\" (UID: \"437e2975-7afe-4f5b-9dbd-90467f7ccc49\") " pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.403250 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/437e2975-7afe-4f5b-9dbd-90467f7ccc49-catalog-content\") pod \"community-operators-dfp85\" (UID: \"437e2975-7afe-4f5b-9dbd-90467f7ccc49\") " pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.403275 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk2pf\" (UniqueName: \"kubernetes.io/projected/437e2975-7afe-4f5b-9dbd-90467f7ccc49-kube-api-access-fk2pf\") pod \"community-operators-dfp85\" (UID: \"437e2975-7afe-4f5b-9dbd-90467f7ccc49\") " pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.404051 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/437e2975-7afe-4f5b-9dbd-90467f7ccc49-catalog-content\") pod \"community-operators-dfp85\" (UID: \"437e2975-7afe-4f5b-9dbd-90467f7ccc49\") " pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.405911 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/437e2975-7afe-4f5b-9dbd-90467f7ccc49-utilities\") pod \"community-operators-dfp85\" (UID: \"437e2975-7afe-4f5b-9dbd-90467f7ccc49\") " pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.407078 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ndvz"] Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.432993 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk2pf\" (UniqueName: \"kubernetes.io/projected/437e2975-7afe-4f5b-9dbd-90467f7ccc49-kube-api-access-fk2pf\") pod \"community-operators-dfp85\" (UID: \"437e2975-7afe-4f5b-9dbd-90467f7ccc49\") " pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.504623 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd1d1772-cc14-4494-b99c-78ffa31903a0-utilities\") pod \"certified-operators-2ndvz\" (UID: \"cd1d1772-cc14-4494-b99c-78ffa31903a0\") " pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.504878 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd1d1772-cc14-4494-b99c-78ffa31903a0-catalog-content\") pod \"certified-operators-2ndvz\" (UID: 
\"cd1d1772-cc14-4494-b99c-78ffa31903a0\") " pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.505006 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb7wd\" (UniqueName: \"kubernetes.io/projected/cd1d1772-cc14-4494-b99c-78ffa31903a0-kube-api-access-bb7wd\") pod \"certified-operators-2ndvz\" (UID: \"cd1d1772-cc14-4494-b99c-78ffa31903a0\") " pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.534326 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfp85" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.606018 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd1d1772-cc14-4494-b99c-78ffa31903a0-catalog-content\") pod \"certified-operators-2ndvz\" (UID: \"cd1d1772-cc14-4494-b99c-78ffa31903a0\") " pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.606394 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb7wd\" (UniqueName: \"kubernetes.io/projected/cd1d1772-cc14-4494-b99c-78ffa31903a0-kube-api-access-bb7wd\") pod \"certified-operators-2ndvz\" (UID: \"cd1d1772-cc14-4494-b99c-78ffa31903a0\") " pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.606458 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd1d1772-cc14-4494-b99c-78ffa31903a0-utilities\") pod \"certified-operators-2ndvz\" (UID: \"cd1d1772-cc14-4494-b99c-78ffa31903a0\") " pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.607377 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd1d1772-cc14-4494-b99c-78ffa31903a0-utilities\") pod \"certified-operators-2ndvz\" (UID: \"cd1d1772-cc14-4494-b99c-78ffa31903a0\") " pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.608266 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd1d1772-cc14-4494-b99c-78ffa31903a0-catalog-content\") pod \"certified-operators-2ndvz\" (UID: \"cd1d1772-cc14-4494-b99c-78ffa31903a0\") " pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.629724 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb7wd\" (UniqueName: \"kubernetes.io/projected/cd1d1772-cc14-4494-b99c-78ffa31903a0-kube-api-access-bb7wd\") pod \"certified-operators-2ndvz\" (UID: \"cd1d1772-cc14-4494-b99c-78ffa31903a0\") " pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.710931 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ndvz" Sep 30 17:20:15 crc kubenswrapper[4688]: I0930 17:20:15.790632 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfp85"] Sep 30 17:20:16 crc kubenswrapper[4688]: I0930 17:20:16.092359 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfp85" event={"ID":"437e2975-7afe-4f5b-9dbd-90467f7ccc49","Type":"ContainerStarted","Data":"07da688951359500df5bb06002b77d109d5f25abc5b4048e35b0043469f817c9"} Sep 30 17:20:16 crc kubenswrapper[4688]: I0930 17:20:16.092835 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfp85" event={"ID":"437e2975-7afe-4f5b-9dbd-90467f7ccc49","Type":"ContainerStarted","Data":"4ff80b3d8093beb58f2cc8149ea0845419d0bbaa176d91b334f9a21172efde41"} Sep 30 17:20:16 crc kubenswrapper[4688]: I0930 17:20:16.095404 4688 generic.go:334] "Generic (PLEG): container finished" podID="558a9e5b-0ede-4aca-9dbd-560ea9578720" containerID="97c154fd2ef0f4739139c03fa158f4db0de4e42fb518a84b5fc2aa86e27e3845" exitCode=0 Sep 30 17:20:16 crc kubenswrapper[4688]: I0930 17:20:16.095487 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjxnj" event={"ID":"558a9e5b-0ede-4aca-9dbd-560ea9578720","Type":"ContainerDied","Data":"97c154fd2ef0f4739139c03fa158f4db0de4e42fb518a84b5fc2aa86e27e3845"} Sep 30 17:20:16 crc kubenswrapper[4688]: I0930 17:20:16.134033 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ndvz"] Sep 30 17:20:16 crc kubenswrapper[4688]: W0930 17:20:16.149198 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd1d1772_cc14_4494_b99c_78ffa31903a0.slice/crio-ce052c6bf59c86328815f678f356c91c8b29c22027f1cfe71191913766d35fac WatchSource:0}: Error finding container ce052c6bf59c86328815f678f356c91c8b29c22027f1cfe71191913766d35fac: Status 404 returned error can't find the container with id ce052c6bf59c86328815f678f356c91c8b29c22027f1cfe71191913766d35fac Sep 30 17:20:17 crc kubenswrapper[4688]: I0930 17:20:17.111773 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjxnj" event={"ID":"558a9e5b-0ede-4aca-9dbd-560ea9578720","Type":"ContainerStarted","Data":"63f3160e2a1bbdc4970d52a8d4c5b53964865f6ecd642de1369ec431e07d8a5b"} Sep 30 17:20:17 crc kubenswrapper[4688]: I0930 17:20:17.114544 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n8lw2" event={"ID":"1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5","Type":"ContainerStarted","Data":"908f218ec92a55e034146d5c9b8c630d60c694a5d4430f0b8db2303e30c23bcc"} Sep 30 17:20:17 crc kubenswrapper[4688]: I0930 17:20:17.116094 4688 generic.go:334] "Generic (PLEG): container finished" podID="cd1d1772-cc14-4494-b99c-78ffa31903a0" containerID="aee9d19491663bb096e91710e8ba22b4bb3d333d66bfc16e20f384d7229b6a15" exitCode=0 Sep 30 17:20:17 crc kubenswrapper[4688]: I0930 17:20:17.116183 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ndvz" event={"ID":"cd1d1772-cc14-4494-b99c-78ffa31903a0","Type":"ContainerDied","Data":"aee9d19491663bb096e91710e8ba22b4bb3d333d66bfc16e20f384d7229b6a15"} Sep 30 17:20:17 crc kubenswrapper[4688]: I0930 17:20:17.116219 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-2ndvz" event={"ID":"cd1d1772-cc14-4494-b99c-78ffa31903a0","Type":"ContainerStarted","Data":"ce052c6bf59c86328815f678f356c91c8b29c22027f1cfe71191913766d35fac"} Sep 30 17:20:17 crc kubenswrapper[4688]: I0930 17:20:17.117813 4688 generic.go:334] "Generic (PLEG): container finished" podID="437e2975-7afe-4f5b-9dbd-90467f7ccc49" containerID="07da688951359500df5bb06002b77d109d5f25abc5b4048e35b0043469f817c9" exitCode=0 Sep 30 17:20:17 crc kubenswrapper[4688]: I0930 17:20:17.117841 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfp85" event={"ID":"437e2975-7afe-4f5b-9dbd-90467f7ccc49","Type":"ContainerDied","Data":"07da688951359500df5bb06002b77d109d5f25abc5b4048e35b0043469f817c9"} Sep 30 17:20:17 crc kubenswrapper[4688]: I0930 17:20:17.132489 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wjxnj" podStartSLOduration=2.694667239 podStartE2EDuration="5.132466627s" podCreationTimestamp="2025-09-30 17:20:12 +0000 UTC" firstStartedPulling="2025-09-30 17:20:14.069543702 +0000 UTC m=+271.364981260" lastFinishedPulling="2025-09-30 17:20:16.50734309 +0000 UTC m=+273.802780648" observedRunningTime="2025-09-30 17:20:17.131021448 +0000 UTC m=+274.426458996" watchObservedRunningTime="2025-09-30 17:20:17.132466627 +0000 UTC m=+274.427904185" Sep 30 17:20:17 crc kubenswrapper[4688]: I0930 17:20:17.154921 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n8lw2" podStartSLOduration=3.316474306 podStartE2EDuration="5.154896822s" podCreationTimestamp="2025-09-30 17:20:12 +0000 UTC" firstStartedPulling="2025-09-30 17:20:14.067779915 +0000 UTC m=+271.363217473" lastFinishedPulling="2025-09-30 17:20:15.906202411 +0000 UTC m=+273.201639989" observedRunningTime="2025-09-30 17:20:17.151947684 +0000 UTC m=+274.447385262" watchObservedRunningTime="2025-09-30 17:20:17.154896822 +0000 UTC m=+274.450334370" Sep 30 17:20:19 crc kubenswrapper[4688]: I0930 17:20:19.132081 4688 generic.go:334] "Generic (PLEG): container finished" podID="cd1d1772-cc14-4494-b99c-78ffa31903a0" containerID="58b9225a4b81efa9368d501bde85e1b001d1ffbac695caf947bb9abf059aab8f" exitCode=0 Sep 30 17:20:19 crc kubenswrapper[4688]: I0930 17:20:19.132204 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ndvz" event={"ID":"cd1d1772-cc14-4494-b99c-78ffa31903a0","Type":"ContainerDied","Data":"58b9225a4b81efa9368d501bde85e1b001d1ffbac695caf947bb9abf059aab8f"} Sep 30 17:20:19 crc kubenswrapper[4688]: I0930 17:20:19.135601 4688 generic.go:334] "Generic (PLEG): container finished" podID="437e2975-7afe-4f5b-9dbd-90467f7ccc49" containerID="f0a1739b0e5f5b1fbe717ab8b05360edf1d6e8546a0812146bdb777593fa030d" exitCode=0 Sep 30 17:20:19 crc kubenswrapper[4688]: I0930 17:20:19.135648 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfp85" event={"ID":"437e2975-7afe-4f5b-9dbd-90467f7ccc49","Type":"ContainerDied","Data":"f0a1739b0e5f5b1fbe717ab8b05360edf1d6e8546a0812146bdb777593fa030d"} Sep 30 17:20:22 crc kubenswrapper[4688]: I0930 17:20:22.154706 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfp85" event={"ID":"437e2975-7afe-4f5b-9dbd-90467f7ccc49","Type":"ContainerStarted","Data":"c60d9d642a92b5e6acf972bf5ac5c44c64234700da5c5fee6f7911a09f291fba"} Sep 30 17:20:22 crc 
Sep 30 17:20:19 crc kubenswrapper[4688]: I0930 17:20:19.132081 4688 generic.go:334] "Generic (PLEG): container finished" podID="cd1d1772-cc14-4494-b99c-78ffa31903a0" containerID="58b9225a4b81efa9368d501bde85e1b001d1ffbac695caf947bb9abf059aab8f" exitCode=0
Sep 30 17:20:19 crc kubenswrapper[4688]: I0930 17:20:19.132204 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ndvz" event={"ID":"cd1d1772-cc14-4494-b99c-78ffa31903a0","Type":"ContainerDied","Data":"58b9225a4b81efa9368d501bde85e1b001d1ffbac695caf947bb9abf059aab8f"}
Sep 30 17:20:19 crc kubenswrapper[4688]: I0930 17:20:19.135601 4688 generic.go:334] "Generic (PLEG): container finished" podID="437e2975-7afe-4f5b-9dbd-90467f7ccc49" containerID="f0a1739b0e5f5b1fbe717ab8b05360edf1d6e8546a0812146bdb777593fa030d" exitCode=0
Sep 30 17:20:19 crc kubenswrapper[4688]: I0930 17:20:19.135648 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfp85" event={"ID":"437e2975-7afe-4f5b-9dbd-90467f7ccc49","Type":"ContainerDied","Data":"f0a1739b0e5f5b1fbe717ab8b05360edf1d6e8546a0812146bdb777593fa030d"}
Sep 30 17:20:22 crc kubenswrapper[4688]: I0930 17:20:22.154706 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfp85" event={"ID":"437e2975-7afe-4f5b-9dbd-90467f7ccc49","Type":"ContainerStarted","Data":"c60d9d642a92b5e6acf972bf5ac5c44c64234700da5c5fee6f7911a09f291fba"}
Sep 30 17:20:22 crc kubenswrapper[4688]: I0930 17:20:22.156613 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ndvz" event={"ID":"cd1d1772-cc14-4494-b99c-78ffa31903a0","Type":"ContainerStarted","Data":"3972f2328f3581dae4931b5389d43d1a0356d060e1fee98dee1becb86b9d759c"}
Sep 30 17:20:22 crc kubenswrapper[4688]: I0930 17:20:22.180097 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2ndvz" podStartSLOduration=3.264255512 podStartE2EDuration="7.180066103s" podCreationTimestamp="2025-09-30 17:20:15 +0000 UTC" firstStartedPulling="2025-09-30 17:20:17.11827875 +0000 UTC m=+274.413716298" lastFinishedPulling="2025-09-30 17:20:21.034089331 +0000 UTC m=+278.329526889" observedRunningTime="2025-09-30 17:20:22.173634462 +0000 UTC m=+279.469072040" watchObservedRunningTime="2025-09-30 17:20:22.180066103 +0000 UTC m=+279.475503701"
Sep 30 17:20:23 crc kubenswrapper[4688]: I0930 17:20:23.115095 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n8lw2"
Sep 30 17:20:23 crc kubenswrapper[4688]: I0930 17:20:23.116807 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n8lw2"
Sep 30 17:20:23 crc kubenswrapper[4688]: I0930 17:20:23.155874 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n8lw2"
Sep 30 17:20:23 crc kubenswrapper[4688]: I0930 17:20:23.192487 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dfp85" podStartSLOduration=3.503461386 podStartE2EDuration="8.192462327s" podCreationTimestamp="2025-09-30 17:20:15 +0000 UTC" firstStartedPulling="2025-09-30 17:20:17.119515373 +0000 UTC m=+274.414952931" lastFinishedPulling="2025-09-30 17:20:21.808516324 +0000 UTC m=+279.103953872" observedRunningTime="2025-09-30 17:20:23.188841361 +0000 UTC m=+280.484278919" watchObservedRunningTime="2025-09-30 17:20:23.192462327 +0000 UTC m=+280.487899885"
Sep 30 17:20:23 crc kubenswrapper[4688]: I0930 17:20:23.204481 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n8lw2"
Sep 30 17:20:23 crc kubenswrapper[4688]: I0930 17:20:23.345949 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wjxnj"
Sep 30 17:20:23 crc kubenswrapper[4688]: I0930 17:20:23.346740 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wjxnj"
Sep 30 17:20:23 crc kubenswrapper[4688]: I0930 17:20:23.386542 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wjxnj"
Sep 30 17:20:24 crc kubenswrapper[4688]: I0930 17:20:24.212765 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wjxnj"
Sep 30 17:20:25 crc kubenswrapper[4688]: I0930 17:20:25.534863 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dfp85"
Sep 30 17:20:25 crc kubenswrapper[4688]: I0930 17:20:25.534914 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dfp85"
Sep 30 17:20:25 crc kubenswrapper[4688]: I0930 17:20:25.577949 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dfp85"
Sep 30 17:20:25 crc kubenswrapper[4688]: I0930 17:20:25.712867 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2ndvz"
Sep 30 17:20:25 crc kubenswrapper[4688]: I0930 17:20:25.712964 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2ndvz"
Sep 30 17:20:25 crc kubenswrapper[4688]: I0930 17:20:25.764555 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2ndvz"
Sep 30 17:20:26 crc kubenswrapper[4688]: I0930 17:20:26.220815 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2ndvz"
Sep 30 17:20:35 crc kubenswrapper[4688]: I0930 17:20:35.576835 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dfp85"
Sep 30 17:21:52 crc kubenswrapper[4688]: I0930 17:21:52.546205 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:21:52 crc kubenswrapper[4688]: I0930 17:21:52.547008 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:22:22 crc kubenswrapper[4688]: I0930 17:22:22.545693 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:22:22 crc kubenswrapper[4688]: I0930 17:22:22.546415 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:22:52 crc kubenswrapper[4688]: I0930 17:22:52.545923 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:22:52 crc kubenswrapper[4688]: I0930 17:22:52.546754 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:22:52 crc kubenswrapper[4688]: I0930 17:22:52.546847 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4"
Sep 30 17:22:52 crc kubenswrapper[4688]: I0930 17:22:52.547720 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b04a12f996787b13e1d57cd8e507854626f736e1ad7221732b79722b6885a9e0"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 17:22:52 crc kubenswrapper[4688]: I0930 17:22:52.547859 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://b04a12f996787b13e1d57cd8e507854626f736e1ad7221732b79722b6885a9e0" gracePeriod=600
Sep 30 17:22:53 crc kubenswrapper[4688]: I0930 17:22:53.118389 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="b04a12f996787b13e1d57cd8e507854626f736e1ad7221732b79722b6885a9e0" exitCode=0
Sep 30 17:22:53 crc kubenswrapper[4688]: I0930 17:22:53.118495 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"b04a12f996787b13e1d57cd8e507854626f736e1ad7221732b79722b6885a9e0"}
Sep 30 17:22:53 crc kubenswrapper[4688]: I0930 17:22:53.118931 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"f0e3d987437b58b18ce72c96975b0b5cff9d653e36b573ea916dae6fdfa57434"}
Sep 30 17:22:53 crc kubenswrapper[4688]: I0930 17:22:53.118964 4688 scope.go:117] "RemoveContainer" containerID="8c23c9230c08dd863352fd2693492b3e6e5b79411b7d1df7ceae38ecfa8c4568"
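
Aside on the restart above: the liveness failures land at 17:21:52, 17:22:22 and 17:22:52, with the kill decision following the third, which is consistent with an HTTP liveness probe on 127.0.0.1:8798/health running every 30 s with a failure threshold of 3; the kill entry itself shows gracePeriod=600. A hypothetical reconstruction of such a probe using the Kubernetes Go API types; the period and threshold are inferred from the cadence, not taken from the actual machine-config-daemon manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Path, port and host come straight from the probe output in the log;
	// PeriodSeconds and FailureThreshold are assumptions inferred from the
	// 30 s failure cadence and the restart after the third failure.
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // assumed from the log cadence
		FailureThreshold: 3,  // assumed: the kill followed the third failure
	}
	gracePeriod := int64(600) // gracePeriod=600 in the "Killing container" entry
	fmt.Println(probe, gracePeriod)
}
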
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.151409 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7xctq"] Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.292535 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-trusted-ca\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.292647 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p84qb\" (UniqueName: \"kubernetes.io/projected/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-kube-api-access-p84qb\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.292699 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-bound-sa-token\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.292760 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.292789 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-registry-certificates\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.292842 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.292933 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-registry-tls\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.292974 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.319448 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.394253 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.394355 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-registry-certificates\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.394454 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-registry-tls\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.394516 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.394645 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-trusted-ca\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.394706 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p84qb\" (UniqueName: \"kubernetes.io/projected/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-kube-api-access-p84qb\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.394782 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-bound-sa-token\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.395529 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.397682 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-registry-certificates\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.398360 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-trusted-ca\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.404256 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.405881 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-registry-tls\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.416982 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-bound-sa-token\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.418352 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p84qb\" (UniqueName: \"kubernetes.io/projected/ff0df80c-2ec8-4ecd-89b3-3f53a6960275-kube-api-access-p84qb\") pod \"image-registry-66df7c8f76-7xctq\" (UID: \"ff0df80c-2ec8-4ecd-89b3-3f53a6960275\") " pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.710730 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:38 crc kubenswrapper[4688]: I0930 17:23:38.956779 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7xctq"] Sep 30 17:23:39 crc kubenswrapper[4688]: I0930 17:23:39.419501 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" event={"ID":"ff0df80c-2ec8-4ecd-89b3-3f53a6960275","Type":"ContainerStarted","Data":"0f7951b63bde01f63479b11a8e984267c9ec17e831ad81f3e50f90ce0cc6fa0f"} Sep 30 17:23:39 crc kubenswrapper[4688]: I0930 17:23:39.419880 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" event={"ID":"ff0df80c-2ec8-4ecd-89b3-3f53a6960275","Type":"ContainerStarted","Data":"31ac72a7cb2885939a08b56628fb16cbeb5ab99c19ceceb79209c5f095dee13b"} Sep 30 17:23:39 crc kubenswrapper[4688]: I0930 17:23:39.419904 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:39 crc kubenswrapper[4688]: I0930 17:23:39.462221 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" podStartSLOduration=1.462184665 podStartE2EDuration="1.462184665s" podCreationTimestamp="2025-09-30 17:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:23:39.448704077 +0000 UTC m=+476.744141645" watchObservedRunningTime="2025-09-30 17:23:39.462184665 +0000 UTC m=+476.757622233" Sep 30 17:23:58 crc kubenswrapper[4688]: I0930 17:23:58.716391 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7xctq" Sep 30 17:23:58 crc kubenswrapper[4688]: I0930 17:23:58.790027 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85wjs"] Sep 30 17:24:23 crc kubenswrapper[4688]: I0930 17:24:23.867623 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" podUID="14d12c61-000b-4766-93ff-ae30454582a7" containerName="registry" containerID="cri-o://f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991" gracePeriod=30 Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.071627 4688 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-85wjs container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.23:5000/healthz\": dial tcp 10.217.0.23:5000: connect: connection refused" start-of-body= Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.071707 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" podUID="14d12c61-000b-4766-93ff-ae30454582a7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.23:5000/healthz\": dial tcp 10.217.0.23:5000: connect: connection refused" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.228549 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.333014 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-trusted-ca\") pod \"14d12c61-000b-4766-93ff-ae30454582a7\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.333231 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"14d12c61-000b-4766-93ff-ae30454582a7\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.333292 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/14d12c61-000b-4766-93ff-ae30454582a7-ca-trust-extracted\") pod \"14d12c61-000b-4766-93ff-ae30454582a7\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.333317 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-registry-certificates\") pod \"14d12c61-000b-4766-93ff-ae30454582a7\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.333374 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-registry-tls\") pod \"14d12c61-000b-4766-93ff-ae30454582a7\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.333422 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/14d12c61-000b-4766-93ff-ae30454582a7-installation-pull-secrets\") pod \"14d12c61-000b-4766-93ff-ae30454582a7\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.333461 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ggg4\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-kube-api-access-6ggg4\") pod \"14d12c61-000b-4766-93ff-ae30454582a7\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.333479 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-bound-sa-token\") pod \"14d12c61-000b-4766-93ff-ae30454582a7\" (UID: \"14d12c61-000b-4766-93ff-ae30454582a7\") " Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.334712 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "14d12c61-000b-4766-93ff-ae30454582a7" (UID: "14d12c61-000b-4766-93ff-ae30454582a7"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.336159 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "14d12c61-000b-4766-93ff-ae30454582a7" (UID: "14d12c61-000b-4766-93ff-ae30454582a7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.341164 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "14d12c61-000b-4766-93ff-ae30454582a7" (UID: "14d12c61-000b-4766-93ff-ae30454582a7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.341938 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-kube-api-access-6ggg4" (OuterVolumeSpecName: "kube-api-access-6ggg4") pod "14d12c61-000b-4766-93ff-ae30454582a7" (UID: "14d12c61-000b-4766-93ff-ae30454582a7"). InnerVolumeSpecName "kube-api-access-6ggg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.342559 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d12c61-000b-4766-93ff-ae30454582a7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "14d12c61-000b-4766-93ff-ae30454582a7" (UID: "14d12c61-000b-4766-93ff-ae30454582a7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.344169 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "14d12c61-000b-4766-93ff-ae30454582a7" (UID: "14d12c61-000b-4766-93ff-ae30454582a7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.348905 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "14d12c61-000b-4766-93ff-ae30454582a7" (UID: "14d12c61-000b-4766-93ff-ae30454582a7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.351936 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d12c61-000b-4766-93ff-ae30454582a7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "14d12c61-000b-4766-93ff-ae30454582a7" (UID: "14d12c61-000b-4766-93ff-ae30454582a7"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.437780 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.437835 4688 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/14d12c61-000b-4766-93ff-ae30454582a7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.437856 4688 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/14d12c61-000b-4766-93ff-ae30454582a7-registry-certificates\") on node \"crc\" DevicePath \"\"" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.437871 4688 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-registry-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.437886 4688 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/14d12c61-000b-4766-93ff-ae30454582a7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.437896 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ggg4\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-kube-api-access-6ggg4\") on node \"crc\" DevicePath \"\"" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.437905 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14d12c61-000b-4766-93ff-ae30454582a7-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.704992 4688 generic.go:334] "Generic (PLEG): container finished" podID="14d12c61-000b-4766-93ff-ae30454582a7" containerID="f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991" exitCode=0 Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.705042 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" event={"ID":"14d12c61-000b-4766-93ff-ae30454582a7","Type":"ContainerDied","Data":"f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991"} Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.705089 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.705115 4688 scope.go:117] "RemoveContainer" containerID="f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.705101 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-85wjs" event={"ID":"14d12c61-000b-4766-93ff-ae30454582a7","Type":"ContainerDied","Data":"d3de27128303b5d5d9b0e04b19d1fe23d8df501db7a5d0fb5935b7e2410a1b92"} Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.724058 4688 scope.go:117] "RemoveContainer" containerID="f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991" Sep 30 17:24:24 crc kubenswrapper[4688]: E0930 17:24:24.725024 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991\": container with ID starting with f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991 not found: ID does not exist" containerID="f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.725078 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991"} err="failed to get container status \"f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991\": rpc error: code = NotFound desc = could not find container \"f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991\": container with ID starting with f4adc14dee26c7e038ca371a381de70581d74ef1c6a6ddb00db52c01d2474991 not found: ID does not exist" Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.745480 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85wjs"] Sep 30 17:24:24 crc kubenswrapper[4688]: I0930 17:24:24.750452 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-85wjs"] Sep 30 17:24:25 crc kubenswrapper[4688]: I0930 17:24:25.438620 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14d12c61-000b-4766-93ff-ae30454582a7" path="/var/lib/kubelet/pods/14d12c61-000b-4766-93ff-ae30454582a7/volumes" Sep 30 17:24:43 crc kubenswrapper[4688]: I0930 17:24:43.546631 4688 scope.go:117] "RemoveContainer" containerID="48204aa2ef67a355e95893512288e73600e0540dc65a475b0ec4c5b350969a98" Sep 30 17:24:52 crc kubenswrapper[4688]: I0930 17:24:52.545622 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:24:52 crc kubenswrapper[4688]: I0930 17:24:52.546329 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:25:22 crc kubenswrapper[4688]: I0930 17:25:22.546131 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon 
Sep 30 17:25:22 crc kubenswrapper[4688]: I0930 17:25:22.546131 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:25:22 crc kubenswrapper[4688]: I0930 17:25:22.547079 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:25:43 crc kubenswrapper[4688]: I0930 17:25:43.589840 4688 scope.go:117] "RemoveContainer" containerID="e3a7855486813cfd84ccc7bf315d1e2d3f1833472cae37f7d9714c219925ec6b"
Sep 30 17:25:43 crc kubenswrapper[4688]: I0930 17:25:43.612464 4688 scope.go:117] "RemoveContainer" containerID="85ab9e7789aa905ea8391b9c816ad937fa812fb8a9ee67a44b1005e761e4bd02"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.947292 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-jzzz7"]
Sep 30 17:25:46 crc kubenswrapper[4688]: E0930 17:25:46.948040 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d12c61-000b-4766-93ff-ae30454582a7" containerName="registry"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.948058 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d12c61-000b-4766-93ff-ae30454582a7" containerName="registry"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.948191 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d12c61-000b-4766-93ff-ae30454582a7" containerName="registry"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.948707 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-jzzz7"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.952702 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wgz84"]
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.953676 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-wgz84"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.954653 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rz8rf"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.954953 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.955275 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.957216 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-h6gwp"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.966120 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-jzzz7"]
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.970933 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wgz84"]
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.982589 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-tj7k4"]
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.984020 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4"
Sep 30 17:25:46 crc kubenswrapper[4688]: I0930 17:25:46.986035 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-b4k6t"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.001990 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-tj7k4"]
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.034103 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rptsw\" (UniqueName: \"kubernetes.io/projected/f6514521-cd09-4a1e-b627-62e0703af4e6-kube-api-access-rptsw\") pod \"cert-manager-cainjector-7f985d654d-jzzz7\" (UID: \"f6514521-cd09-4a1e-b627-62e0703af4e6\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-jzzz7"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.135934 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rptsw\" (UniqueName: \"kubernetes.io/projected/f6514521-cd09-4a1e-b627-62e0703af4e6-kube-api-access-rptsw\") pod \"cert-manager-cainjector-7f985d654d-jzzz7\" (UID: \"f6514521-cd09-4a1e-b627-62e0703af4e6\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-jzzz7"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.136316 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtvf2\" (UniqueName: \"kubernetes.io/projected/cf0898fe-428e-49df-bafd-a45e3f38d476-kube-api-access-gtvf2\") pod \"cert-manager-webhook-5655c58dd6-tj7k4\" (UID: \"cf0898fe-428e-49df-bafd-a45e3f38d476\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.136422 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltw8j\" (UniqueName: \"kubernetes.io/projected/c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15-kube-api-access-ltw8j\") pod \"cert-manager-5b446d88c5-wgz84\" (UID: \"c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15\") " pod="cert-manager/cert-manager-5b446d88c5-wgz84"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.156159 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rptsw\" (UniqueName: \"kubernetes.io/projected/f6514521-cd09-4a1e-b627-62e0703af4e6-kube-api-access-rptsw\") pod \"cert-manager-cainjector-7f985d654d-jzzz7\" (UID: \"f6514521-cd09-4a1e-b627-62e0703af4e6\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-jzzz7"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.237474 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltw8j\" (UniqueName: \"kubernetes.io/projected/c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15-kube-api-access-ltw8j\") pod \"cert-manager-5b446d88c5-wgz84\" (UID: \"c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15\") " pod="cert-manager/cert-manager-5b446d88c5-wgz84"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.237553 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtvf2\" (UniqueName: \"kubernetes.io/projected/cf0898fe-428e-49df-bafd-a45e3f38d476-kube-api-access-gtvf2\") pod \"cert-manager-webhook-5655c58dd6-tj7k4\" (UID: \"cf0898fe-428e-49df-bafd-a45e3f38d476\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.256476 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtvf2\" (UniqueName: \"kubernetes.io/projected/cf0898fe-428e-49df-bafd-a45e3f38d476-kube-api-access-gtvf2\") pod \"cert-manager-webhook-5655c58dd6-tj7k4\" (UID: \"cf0898fe-428e-49df-bafd-a45e3f38d476\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.259952 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltw8j\" (UniqueName: \"kubernetes.io/projected/c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15-kube-api-access-ltw8j\") pod \"cert-manager-5b446d88c5-wgz84\" (UID: \"c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15\") " pod="cert-manager/cert-manager-5b446d88c5-wgz84"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.268914 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-jzzz7"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.283814 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-wgz84"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.298535 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4"
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.546320 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-jzzz7"]
Sep 30 17:25:47 crc kubenswrapper[4688]: W0930 17:25:47.559272 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6514521_cd09_4a1e_b627_62e0703af4e6.slice/crio-85fd5380edf798ed8b9aaf1606e3e35408728f6f3656130dce2e0ad366bdc57e WatchSource:0}: Error finding container 85fd5380edf798ed8b9aaf1606e3e35408728f6f3656130dce2e0ad366bdc57e: Status 404 returned error can't find the container with id 85fd5380edf798ed8b9aaf1606e3e35408728f6f3656130dce2e0ad366bdc57e
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.565844 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.585047 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wgz84"]
Sep 30 17:25:47 crc kubenswrapper[4688]: I0930 17:25:47.621457 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-tj7k4"]
Sep 30 17:25:47 crc kubenswrapper[4688]: W0930 17:25:47.625026 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf0898fe_428e_49df_bafd_a45e3f38d476.slice/crio-0098db3445daef039fdc109fb7b0791d6f1b7bd6427d2b58ba772c0680dc19ea WatchSource:0}: Error finding container 0098db3445daef039fdc109fb7b0791d6f1b7bd6427d2b58ba772c0680dc19ea: Status 404 returned error can't find the container with id 0098db3445daef039fdc109fb7b0791d6f1b7bd6427d2b58ba772c0680dc19ea
Sep 30 17:25:48 crc kubenswrapper[4688]: I0930 17:25:48.217550 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-jzzz7" event={"ID":"f6514521-cd09-4a1e-b627-62e0703af4e6","Type":"ContainerStarted","Data":"85fd5380edf798ed8b9aaf1606e3e35408728f6f3656130dce2e0ad366bdc57e"}
Sep 30 17:25:48 crc kubenswrapper[4688]: I0930 17:25:48.220505 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-wgz84" event={"ID":"c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15","Type":"ContainerStarted","Data":"3ed1a6ffdbf69cd1e149cd69abf74b971d34b04356910c92df4852c4a984f5fd"}
Sep 30 17:25:48 crc kubenswrapper[4688]: I0930 17:25:48.221460 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4" event={"ID":"cf0898fe-428e-49df-bafd-a45e3f38d476","Type":"ContainerStarted","Data":"0098db3445daef039fdc109fb7b0791d6f1b7bd6427d2b58ba772c0680dc19ea"}
Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.247591 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-wgz84" event={"ID":"c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15","Type":"ContainerStarted","Data":"42148c0a3147171558debe23d2eb395664f4becaa01632031cd65c79079772cf"}
Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.249811 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4" event={"ID":"cf0898fe-428e-49df-bafd-a45e3f38d476","Type":"ContainerStarted","Data":"9261b4ba524f523ccb80c0c85d34383d84769ee84860a8736ce164fac8a1d3b8"}
Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.249965 4688
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4" Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.251712 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-jzzz7" event={"ID":"f6514521-cd09-4a1e-b627-62e0703af4e6","Type":"ContainerStarted","Data":"23c103735c68b7b031032533d6e2812f3b578f55f186ba580910d61240ce40e8"} Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.270978 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-wgz84" podStartSLOduration=2.490894517 podStartE2EDuration="6.270921424s" podCreationTimestamp="2025-09-30 17:25:46 +0000 UTC" firstStartedPulling="2025-09-30 17:25:47.59311446 +0000 UTC m=+604.888552018" lastFinishedPulling="2025-09-30 17:25:51.373141367 +0000 UTC m=+608.668578925" observedRunningTime="2025-09-30 17:25:52.270214305 +0000 UTC m=+609.565651863" watchObservedRunningTime="2025-09-30 17:25:52.270921424 +0000 UTC m=+609.566358982" Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.288231 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4" podStartSLOduration=2.641854727 podStartE2EDuration="6.28820366s" podCreationTimestamp="2025-09-30 17:25:46 +0000 UTC" firstStartedPulling="2025-09-30 17:25:47.632399806 +0000 UTC m=+604.927837364" lastFinishedPulling="2025-09-30 17:25:51.278748719 +0000 UTC m=+608.574186297" observedRunningTime="2025-09-30 17:25:52.287952524 +0000 UTC m=+609.583390082" watchObservedRunningTime="2025-09-30 17:25:52.28820366 +0000 UTC m=+609.583641218" Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.302695 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-jzzz7" podStartSLOduration=2.583293092 podStartE2EDuration="6.302673684s" podCreationTimestamp="2025-09-30 17:25:46 +0000 UTC" firstStartedPulling="2025-09-30 17:25:47.565635799 +0000 UTC m=+604.861073357" lastFinishedPulling="2025-09-30 17:25:51.285016391 +0000 UTC m=+608.580453949" observedRunningTime="2025-09-30 17:25:52.301246887 +0000 UTC m=+609.596684475" watchObservedRunningTime="2025-09-30 17:25:52.302673684 +0000 UTC m=+609.598111242" Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.546229 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.546320 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.546385 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.548308 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f0e3d987437b58b18ce72c96975b0b5cff9d653e36b573ea916dae6fdfa57434"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:25:52 crc kubenswrapper[4688]: I0930 17:25:52.548498 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://f0e3d987437b58b18ce72c96975b0b5cff9d653e36b573ea916dae6fdfa57434" gracePeriod=600 Sep 30 17:25:53 crc kubenswrapper[4688]: I0930 17:25:53.260325 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="f0e3d987437b58b18ce72c96975b0b5cff9d653e36b573ea916dae6fdfa57434" exitCode=0 Sep 30 17:25:53 crc kubenswrapper[4688]: I0930 17:25:53.260372 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"f0e3d987437b58b18ce72c96975b0b5cff9d653e36b573ea916dae6fdfa57434"} Sep 30 17:25:53 crc kubenswrapper[4688]: I0930 17:25:53.260825 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"5020dee8e64f5f30e7a64368e8118bacf47eeb6e8b507a0b15d70edc0bd6cd00"} Sep 30 17:25:53 crc kubenswrapper[4688]: I0930 17:25:53.260845 4688 scope.go:117] "RemoveContainer" containerID="b04a12f996787b13e1d57cd8e507854626f736e1ad7221732b79722b6885a9e0" Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.301526 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-tj7k4" Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.494542 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rwctq"] Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.495105 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovn-controller" containerID="cri-o://47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2" gracePeriod=30 Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.495254 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="northd" containerID="cri-o://d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9" gracePeriod=30 Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.495227 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="nbdb" containerID="cri-o://189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85" gracePeriod=30 Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.495317 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kube-rbac-proxy-node" 
containerID="cri-o://0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f" gracePeriod=30 Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.495356 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovn-acl-logging" containerID="cri-o://ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2" gracePeriod=30 Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.495302 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4" gracePeriod=30 Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.495773 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="sbdb" containerID="cri-o://38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5" gracePeriod=30 Sep 30 17:25:57 crc kubenswrapper[4688]: I0930 17:25:57.530685 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" containerID="cri-o://d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06" gracePeriod=30 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.253665 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/3.log" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.256594 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovn-acl-logging/0.log" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.257199 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovn-controller/0.log" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.257805 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.298794 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovnkube-controller/3.log" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.300825 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovn-acl-logging/0.log" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301317 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rwctq_cb5dfa5a-bc52-43b4-a301-730ccb234480/ovn-controller/0.log" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301684 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06" exitCode=0 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301718 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5" exitCode=0 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301730 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85" exitCode=0 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301741 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9" exitCode=0 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301749 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4" exitCode=0 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301757 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f" exitCode=0 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301765 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2" exitCode=143 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301774 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerID="47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2" exitCode=143 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301834 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301871 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301885 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301895 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301907 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301917 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301930 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301941 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301947 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301952 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301958 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301963 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301968 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301973 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301979 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301987 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.301995 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302002 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302008 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302015 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302021 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302026 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302032 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302037 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302044 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302049 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302057 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302064 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302071 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 
17:25:58.302076 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302082 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302087 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302093 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302099 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302105 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302110 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302117 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302124 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" event={"ID":"cb5dfa5a-bc52-43b4-a301-730ccb234480","Type":"ContainerDied","Data":"12d95964b85acd8483f3bf0607468716c1a05281fc9b387ce2c6ed66e00fb8c2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302133 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302141 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302147 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302188 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302198 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 
17:25:58.302206 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302215 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302224 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302232 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302239 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302258 4688 scope.go:117] "RemoveContainer" containerID="d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.302447 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rwctq" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.305830 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/2.log" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.306282 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/1.log" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.306319 4688 generic.go:334] "Generic (PLEG): container finished" podID="03cb23bc-dcc1-46e7-b238-67a6163f456d" containerID="52b96a4ef6fd89ececac978f4f3cf7408a21cb7ddefbfc26a626e86d40b5e776" exitCode=2 Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.306341 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9wbfw" event={"ID":"03cb23bc-dcc1-46e7-b238-67a6163f456d","Type":"ContainerDied","Data":"52b96a4ef6fd89ececac978f4f3cf7408a21cb7ddefbfc26a626e86d40b5e776"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.306357 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1"} Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.307379 4688 scope.go:117] "RemoveContainer" containerID="52b96a4ef6fd89ececac978f4f3cf7408a21cb7ddefbfc26a626e86d40b5e776" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.307679 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9wbfw_openshift-multus(03cb23bc-dcc1-46e7-b238-67a6163f456d)\"" pod="openshift-multus/multus-9wbfw" podUID="03cb23bc-dcc1-46e7-b238-67a6163f456d" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.326043 4688 scope.go:117] "RemoveContainer" 
containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.327525 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q74ql"] Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328161 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kubecfg-setup" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328185 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kubecfg-setup" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328201 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="northd" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328217 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="northd" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328233 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328242 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328251 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="nbdb" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328259 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="nbdb" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328271 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328281 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328293 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovn-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328301 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovn-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328316 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="sbdb" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328323 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="sbdb" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328337 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328345 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328355 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" 
containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328363 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328373 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kube-rbac-proxy-ovn-metrics" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328381 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kube-rbac-proxy-ovn-metrics" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328398 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kube-rbac-proxy-node" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328407 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kube-rbac-proxy-node" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328418 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovn-acl-logging" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328426 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovn-acl-logging" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328691 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kube-rbac-proxy-node" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328708 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328719 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovn-acl-logging" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328726 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328735 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovn-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328743 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="northd" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328754 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="kube-rbac-proxy-ovn-metrics" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328762 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="nbdb" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328772 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328778 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 
17:25:58.328785 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="sbdb" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.328884 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328891 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.328988 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" containerName="ovnkube-controller" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.332224 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.354609 4688 scope.go:117] "RemoveContainer" containerID="38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.382719 4688 scope.go:117] "RemoveContainer" containerID="189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.399185 4688 scope.go:117] "RemoveContainer" containerID="d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.414691 4688 scope.go:117] "RemoveContainer" containerID="134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.424813 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-node-log\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.424895 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-config\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.424930 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-node-log" (OuterVolumeSpecName: "node-log") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425002 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-ovn-kubernetes\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425043 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). 
InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425152 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-env-overrides\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425427 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425427 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425538 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-systemd-units\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425579 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-bin\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425607 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-var-lib-openvswitch\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425634 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-openvswitch\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-kubelet\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425677 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-netns\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425693 4688 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-script-lib\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425715 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425731 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-ovn\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425748 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-systemd\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425765 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-etc-openvswitch\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425782 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5jsc\" (UniqueName: \"kubernetes.io/projected/cb5dfa5a-bc52-43b4-a301-730ccb234480-kube-api-access-n5jsc\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425803 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovn-node-metrics-cert\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425817 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-slash\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425835 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-log-socket\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425856 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-netd\") pod \"cb5dfa5a-bc52-43b4-a301-730ccb234480\" (UID: \"cb5dfa5a-bc52-43b4-a301-730ccb234480\") " Sep 30 17:25:58 crc 
kubenswrapper[4688]: I0930 17:25:58.425963 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-kubelet\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425985 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-cni-netd\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426003 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-log-socket\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426019 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-node-log\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426035 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-run-systemd\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426051 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-run-openvswitch\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426068 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-run-ovn-kubernetes\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426087 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f42cb8a-8540-4662-9f86-de080571eb43-env-overrides\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426105 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-run-netns\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426137 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-run-ovn\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426153 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f42cb8a-8540-4662-9f86-de080571eb43-ovnkube-config\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426177 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-cni-bin\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426193 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f2rk\" (UniqueName: \"kubernetes.io/projected/4f42cb8a-8540-4662-9f86-de080571eb43-kube-api-access-4f2rk\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426243 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-slash\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426275 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f42cb8a-8540-4662-9f86-de080571eb43-ovn-node-metrics-cert\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426293 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-var-lib-openvswitch\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426325 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-etc-openvswitch\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426340 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4f42cb8a-8540-4662-9f86-de080571eb43-ovnkube-script-lib\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426357 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-systemd-units\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426389 4688 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-node-log\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426399 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426412 4688 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426423 4688 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425749 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425763 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425777 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425791 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425810 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425822 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.425835 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426122 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.426139 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.427276 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.427356 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-log-socket" (OuterVolumeSpecName: "log-socket") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.427382 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-slash" (OuterVolumeSpecName: "host-slash") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.427700 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.429224 4688 scope.go:117] "RemoveContainer" containerID="0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.433421 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.433510 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb5dfa5a-bc52-43b4-a301-730ccb234480-kube-api-access-n5jsc" (OuterVolumeSpecName: "kube-api-access-n5jsc") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "kube-api-access-n5jsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.441409 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cb5dfa5a-bc52-43b4-a301-730ccb234480" (UID: "cb5dfa5a-bc52-43b4-a301-730ccb234480"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.443802 4688 scope.go:117] "RemoveContainer" containerID="ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.460103 4688 scope.go:117] "RemoveContainer" containerID="47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.477196 4688 scope.go:117] "RemoveContainer" containerID="fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.503315 4688 scope.go:117] "RemoveContainer" containerID="d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.503811 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": container with ID starting with d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06 not found: ID does not exist" containerID="d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.503857 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} err="failed to get container status \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": rpc error: code = NotFound desc = could not find container \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": container with ID starting with d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.503885 4688 scope.go:117] "RemoveContainer" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.504239 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\": container with ID starting with 0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f not found: ID does not exist" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.504306 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"} err="failed to get container status \"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\": rpc error: code = NotFound desc = could not find container \"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\": container with ID starting with 0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.504345 4688 scope.go:117] "RemoveContainer" containerID="38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.504786 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\": container with ID starting with 
38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5 not found: ID does not exist" containerID="38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.504819 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} err="failed to get container status \"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\": rpc error: code = NotFound desc = could not find container \"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\": container with ID starting with 38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.504839 4688 scope.go:117] "RemoveContainer" containerID="189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.505325 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\": container with ID starting with 189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85 not found: ID does not exist" containerID="189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.505360 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} err="failed to get container status \"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\": rpc error: code = NotFound desc = could not find container \"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\": container with ID starting with 189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.505378 4688 scope.go:117] "RemoveContainer" containerID="d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.506882 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\": container with ID starting with d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9 not found: ID does not exist" containerID="d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.506917 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} err="failed to get container status \"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\": rpc error: code = NotFound desc = could not find container \"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\": container with ID starting with d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.506945 4688 scope.go:117] "RemoveContainer" containerID="134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.507380 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\": container with ID starting with 134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4 not found: ID does not exist" containerID="134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.507414 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} err="failed to get container status \"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\": rpc error: code = NotFound desc = could not find container \"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\": container with ID starting with 134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.507434 4688 scope.go:117] "RemoveContainer" containerID="0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.507772 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\": container with ID starting with 0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f not found: ID does not exist" containerID="0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.507806 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} err="failed to get container status \"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\": rpc error: code = NotFound desc = could not find container \"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\": container with ID starting with 0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.507827 4688 scope.go:117] "RemoveContainer" containerID="ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.508278 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\": container with ID starting with ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2 not found: ID does not exist" containerID="ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.508318 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} err="failed to get container status \"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\": rpc error: code = NotFound desc = could not find container \"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\": container with ID starting with ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.508338 4688 scope.go:117] "RemoveContainer" 
containerID="47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.508693 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\": container with ID starting with 47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2 not found: ID does not exist" containerID="47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.508724 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} err="failed to get container status \"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\": rpc error: code = NotFound desc = could not find container \"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\": container with ID starting with 47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.508748 4688 scope.go:117] "RemoveContainer" containerID="fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8" Sep 30 17:25:58 crc kubenswrapper[4688]: E0930 17:25:58.509281 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\": container with ID starting with fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8 not found: ID does not exist" containerID="fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.509307 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8"} err="failed to get container status \"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\": rpc error: code = NotFound desc = could not find container \"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\": container with ID starting with fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.509324 4688 scope.go:117] "RemoveContainer" containerID="d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.509679 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} err="failed to get container status \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": rpc error: code = NotFound desc = could not find container \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": container with ID starting with d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.509709 4688 scope.go:117] "RemoveContainer" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.510023 4688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"} err="failed to get container status \"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\": rpc error: code = NotFound desc = could not find container \"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\": container with ID starting with 0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.510054 4688 scope.go:117] "RemoveContainer" containerID="38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.510394 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} err="failed to get container status \"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\": rpc error: code = NotFound desc = could not find container \"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\": container with ID starting with 38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.510422 4688 scope.go:117] "RemoveContainer" containerID="189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.510746 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} err="failed to get container status \"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\": rpc error: code = NotFound desc = could not find container \"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\": container with ID starting with 189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.510771 4688 scope.go:117] "RemoveContainer" containerID="d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.511092 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} err="failed to get container status \"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\": rpc error: code = NotFound desc = could not find container \"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\": container with ID starting with d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.511116 4688 scope.go:117] "RemoveContainer" containerID="134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.511396 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} err="failed to get container status \"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\": rpc error: code = NotFound desc = could not find container \"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\": container with ID starting with 134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4 not found: ID does not exist" Sep 
30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.511420 4688 scope.go:117] "RemoveContainer" containerID="0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.511714 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} err="failed to get container status \"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\": rpc error: code = NotFound desc = could not find container \"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\": container with ID starting with 0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.511740 4688 scope.go:117] "RemoveContainer" containerID="ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.512157 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} err="failed to get container status \"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\": rpc error: code = NotFound desc = could not find container \"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\": container with ID starting with ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.512183 4688 scope.go:117] "RemoveContainer" containerID="47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.512666 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} err="failed to get container status \"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\": rpc error: code = NotFound desc = could not find container \"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\": container with ID starting with 47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.512691 4688 scope.go:117] "RemoveContainer" containerID="fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.512960 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8"} err="failed to get container status \"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\": rpc error: code = NotFound desc = could not find container \"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\": container with ID starting with fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.512983 4688 scope.go:117] "RemoveContainer" containerID="d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.513313 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} err="failed to get container status 
\"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": rpc error: code = NotFound desc = could not find container \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": container with ID starting with d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.513353 4688 scope.go:117] "RemoveContainer" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.513724 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"} err="failed to get container status \"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\": rpc error: code = NotFound desc = could not find container \"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\": container with ID starting with 0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.513749 4688 scope.go:117] "RemoveContainer" containerID="38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.513976 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} err="failed to get container status \"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\": rpc error: code = NotFound desc = could not find container \"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\": container with ID starting with 38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.513999 4688 scope.go:117] "RemoveContainer" containerID="189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.514318 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} err="failed to get container status \"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\": rpc error: code = NotFound desc = could not find container \"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\": container with ID starting with 189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.514341 4688 scope.go:117] "RemoveContainer" containerID="d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.514551 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} err="failed to get container status \"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\": rpc error: code = NotFound desc = could not find container \"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\": container with ID starting with d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.514619 4688 scope.go:117] "RemoveContainer" 
containerID="134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.514930 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} err="failed to get container status \"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\": rpc error: code = NotFound desc = could not find container \"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\": container with ID starting with 134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.514955 4688 scope.go:117] "RemoveContainer" containerID="0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.515240 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} err="failed to get container status \"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\": rpc error: code = NotFound desc = could not find container \"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\": container with ID starting with 0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.515264 4688 scope.go:117] "RemoveContainer" containerID="ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.515511 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} err="failed to get container status \"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\": rpc error: code = NotFound desc = could not find container \"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\": container with ID starting with ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.515536 4688 scope.go:117] "RemoveContainer" containerID="47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.515943 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} err="failed to get container status \"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\": rpc error: code = NotFound desc = could not find container \"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\": container with ID starting with 47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.515965 4688 scope.go:117] "RemoveContainer" containerID="fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.516188 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8"} err="failed to get container status \"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\": rpc error: code = NotFound desc = could not find 
container \"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\": container with ID starting with fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.516212 4688 scope.go:117] "RemoveContainer" containerID="d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.516620 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} err="failed to get container status \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": rpc error: code = NotFound desc = could not find container \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": container with ID starting with d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.516644 4688 scope.go:117] "RemoveContainer" containerID="0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.516974 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f"} err="failed to get container status \"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\": rpc error: code = NotFound desc = could not find container \"0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f\": container with ID starting with 0295428577b14fcc1309416ab52bef8b61fddfaa3b36f987f2c5d0a90606db4f not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.516996 4688 scope.go:117] "RemoveContainer" containerID="38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.517342 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5"} err="failed to get container status \"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\": rpc error: code = NotFound desc = could not find container \"38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5\": container with ID starting with 38d057247182d4be56b96e02427a435eb29190652bc3f88806539829c26a65a5 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.517366 4688 scope.go:117] "RemoveContainer" containerID="189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.517791 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85"} err="failed to get container status \"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\": rpc error: code = NotFound desc = could not find container \"189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85\": container with ID starting with 189e5c8a8882b7a9957b24f0025e71d2039b172d1aeb3ee9cdf869ee22ab5f85 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.517820 4688 scope.go:117] "RemoveContainer" containerID="d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.518031 4688 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9"} err="failed to get container status \"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\": rpc error: code = NotFound desc = could not find container \"d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9\": container with ID starting with d912ae50a34d6d11ed249d60b375b2f5c2057a57b1eed4ab8b0bffc70cfefec9 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.518051 4688 scope.go:117] "RemoveContainer" containerID="134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.518337 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4"} err="failed to get container status \"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\": rpc error: code = NotFound desc = could not find container \"134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4\": container with ID starting with 134d33a6afd2fb51772836fc66b01cfa284818c25e942d5451e75650c45bd7d4 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.518359 4688 scope.go:117] "RemoveContainer" containerID="0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.518688 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f"} err="failed to get container status \"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\": rpc error: code = NotFound desc = could not find container \"0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f\": container with ID starting with 0b1b0a2e12a56fdc7b99161d4114455b5c6a83da3a69d55c5d36a247d0992f3f not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.518712 4688 scope.go:117] "RemoveContainer" containerID="ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.518987 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2"} err="failed to get container status \"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\": rpc error: code = NotFound desc = could not find container \"ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2\": container with ID starting with ec819af9c590523adafcc4426e4b6553c0c7a4ea29b5f0a212044e2d2e68a0a2 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.519102 4688 scope.go:117] "RemoveContainer" containerID="47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.519490 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2"} err="failed to get container status \"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\": rpc error: code = NotFound desc = could not find container \"47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2\": container with ID starting with 
47569f7f1c680a4ec76cc8a7e8b2e1dc39b549cd6c8eb605c1c465d3769bfbf2 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.519511 4688 scope.go:117] "RemoveContainer" containerID="fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.519900 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8"} err="failed to get container status \"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\": rpc error: code = NotFound desc = could not find container \"fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8\": container with ID starting with fac1bdfbc885ce12a793899e843f57e18d5c462250145fef059b8d40b043c4e8 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.519927 4688 scope.go:117] "RemoveContainer" containerID="d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.520240 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06"} err="failed to get container status \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": rpc error: code = NotFound desc = could not find container \"d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06\": container with ID starting with d2d850794a866050f661701e2ef62357df9fdb0835dc19a7d590dfe59e3e0e06 not found: ID does not exist" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527429 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-run-openvswitch\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-run-ovn-kubernetes\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527550 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f42cb8a-8540-4662-9f86-de080571eb43-env-overrides\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527647 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-run-openvswitch\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527704 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-run-ovn-kubernetes\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527757 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-run-netns\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527786 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527817 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-run-ovn\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527834 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f42cb8a-8540-4662-9f86-de080571eb43-ovnkube-config\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527868 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-cni-bin\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527883 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f2rk\" (UniqueName: \"kubernetes.io/projected/4f42cb8a-8540-4662-9f86-de080571eb43-kube-api-access-4f2rk\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527903 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-slash\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527933 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f42cb8a-8540-4662-9f86-de080571eb43-ovn-node-metrics-cert\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527952 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-var-lib-openvswitch\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527976 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-etc-openvswitch\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.527990 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4f42cb8a-8540-4662-9f86-de080571eb43-ovnkube-script-lib\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528014 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-systemd-units\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528031 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-kubelet\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528053 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-cni-netd\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528071 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-log-socket\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528085 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-node-log\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528101 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-run-systemd\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528145 4688 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528156 4688 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528167 4688 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-kubelet\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528176 4688 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-run-netns\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528186 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528195 4688 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528205 4688 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528213 4688 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-run-systemd\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528221 4688 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528232 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5jsc\" (UniqueName: \"kubernetes.io/projected/cb5dfa5a-bc52-43b4-a301-730ccb234480-kube-api-access-n5jsc\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528240 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5dfa5a-bc52-43b4-a301-730ccb234480-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528248 4688 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-slash\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528257 4688 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-log-socket\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528266 4688 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-netd\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528274 4688 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-systemd-units\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528284 4688 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5dfa5a-bc52-43b4-a301-730ccb234480-host-cni-bin\") on node \"crc\" DevicePath \"\"" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528315 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-run-systemd\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528293 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4f42cb8a-8540-4662-9f86-de080571eb43-env-overrides\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528647 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-run-netns\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528695 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.528727 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-run-ovn\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529279 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-systemd-units\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529312 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-var-lib-openvswitch\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529322 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4f42cb8a-8540-4662-9f86-de080571eb43-ovnkube-config\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529335 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-etc-openvswitch\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529394 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-kubelet\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529450 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-cni-netd\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529483 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-log-socket\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529514 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-node-log\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529544 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-slash\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.529548 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4f42cb8a-8540-4662-9f86-de080571eb43-host-cni-bin\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.530606 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4f42cb8a-8540-4662-9f86-de080571eb43-ovnkube-script-lib\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.534687 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4f42cb8a-8540-4662-9f86-de080571eb43-ovn-node-metrics-cert\") pod \"ovnkube-node-q74ql\" (UID: \"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.548581 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f2rk\" (UniqueName: \"kubernetes.io/projected/4f42cb8a-8540-4662-9f86-de080571eb43-kube-api-access-4f2rk\") pod \"ovnkube-node-q74ql\" (UID: 
\"4f42cb8a-8540-4662-9f86-de080571eb43\") " pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.640029 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rwctq"] Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.643406 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rwctq"] Sep 30 17:25:58 crc kubenswrapper[4688]: I0930 17:25:58.659618 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:25:59 crc kubenswrapper[4688]: I0930 17:25:59.315605 4688 generic.go:334] "Generic (PLEG): container finished" podID="4f42cb8a-8540-4662-9f86-de080571eb43" containerID="77494e8ffba68c28d6906ae2116d2eebcf04348b98849f2600419a7baa25f303" exitCode=0 Sep 30 17:25:59 crc kubenswrapper[4688]: I0930 17:25:59.315690 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerDied","Data":"77494e8ffba68c28d6906ae2116d2eebcf04348b98849f2600419a7baa25f303"} Sep 30 17:25:59 crc kubenswrapper[4688]: I0930 17:25:59.315754 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerStarted","Data":"defc6cd10e368b602fe8486c5c99903b25cb6f4333442928140bd0409440d5bb"} Sep 30 17:25:59 crc kubenswrapper[4688]: I0930 17:25:59.436769 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb5dfa5a-bc52-43b4-a301-730ccb234480" path="/var/lib/kubelet/pods/cb5dfa5a-bc52-43b4-a301-730ccb234480/volumes" Sep 30 17:26:00 crc kubenswrapper[4688]: I0930 17:26:00.327179 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerStarted","Data":"abd805d72ad715c695a8ffaab676a95af03381edb9f54a245c6637db038a9ce8"} Sep 30 17:26:00 crc kubenswrapper[4688]: I0930 17:26:00.328078 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerStarted","Data":"c41cc2b32cc5b51fe62396188ec74ad63fbc84a9dcd72dcd13f7efbbe2f50dd6"} Sep 30 17:26:00 crc kubenswrapper[4688]: I0930 17:26:00.328091 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerStarted","Data":"66c6010f433bccb00ead0524375afbebd553d9e46b9b4417049eb44a510a06b4"} Sep 30 17:26:00 crc kubenswrapper[4688]: I0930 17:26:00.328100 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerStarted","Data":"ca0238c181880923a1b8c381120f542dc97d3f25dde83c1ca693a64c67416b39"} Sep 30 17:26:00 crc kubenswrapper[4688]: I0930 17:26:00.328110 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerStarted","Data":"5bfe7030a33ea221453fe7dba7888213779736198084358ee9e3871e9c20688c"} Sep 30 17:26:00 crc kubenswrapper[4688]: I0930 17:26:00.328120 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" 
event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerStarted","Data":"6587f702cd9cdc44835451b0a844e6dcb29590ea19c4fcbe5f844ee1a7d5035c"} Sep 30 17:26:03 crc kubenswrapper[4688]: I0930 17:26:03.356195 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerStarted","Data":"78feb006368d59c850972415cc27b3b2f392d74ea670660c5f1fb72f89136cc2"} Sep 30 17:26:05 crc kubenswrapper[4688]: I0930 17:26:05.376083 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" event={"ID":"4f42cb8a-8540-4662-9f86-de080571eb43","Type":"ContainerStarted","Data":"1de1cde1e21fd2236fd4af2a30dcaeea7fbbfed9683d83963fcfbc3e73dda566"} Sep 30 17:26:05 crc kubenswrapper[4688]: I0930 17:26:05.376518 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:26:05 crc kubenswrapper[4688]: I0930 17:26:05.376533 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:26:05 crc kubenswrapper[4688]: I0930 17:26:05.408023 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:26:05 crc kubenswrapper[4688]: I0930 17:26:05.417862 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" podStartSLOduration=7.417837359 podStartE2EDuration="7.417837359s" podCreationTimestamp="2025-09-30 17:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:26:05.416236588 +0000 UTC m=+622.711674146" watchObservedRunningTime="2025-09-30 17:26:05.417837359 +0000 UTC m=+622.713274917" Sep 30 17:26:06 crc kubenswrapper[4688]: I0930 17:26:06.383074 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:26:06 crc kubenswrapper[4688]: I0930 17:26:06.424170 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:26:12 crc kubenswrapper[4688]: I0930 17:26:12.430056 4688 scope.go:117] "RemoveContainer" containerID="52b96a4ef6fd89ececac978f4f3cf7408a21cb7ddefbfc26a626e86d40b5e776" Sep 30 17:26:12 crc kubenswrapper[4688]: E0930 17:26:12.432052 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9wbfw_openshift-multus(03cb23bc-dcc1-46e7-b238-67a6163f456d)\"" pod="openshift-multus/multus-9wbfw" podUID="03cb23bc-dcc1-46e7-b238-67a6163f456d" Sep 30 17:26:27 crc kubenswrapper[4688]: I0930 17:26:27.429542 4688 scope.go:117] "RemoveContainer" containerID="52b96a4ef6fd89ececac978f4f3cf7408a21cb7ddefbfc26a626e86d40b5e776" Sep 30 17:26:28 crc kubenswrapper[4688]: I0930 17:26:28.550296 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/2.log" Sep 30 17:26:28 crc kubenswrapper[4688]: I0930 17:26:28.553036 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/1.log" Sep 30 17:26:28 crc kubenswrapper[4688]: I0930 17:26:28.553101 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9wbfw" event={"ID":"03cb23bc-dcc1-46e7-b238-67a6163f456d","Type":"ContainerStarted","Data":"9cd506c88dc28d994dd7e060d88f03395c1e711f2f93d4b48060b21811afe147"} Sep 30 17:26:28 crc kubenswrapper[4688]: I0930 17:26:28.696001 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q74ql" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.474531 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv"] Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.476620 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.478661 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.494696 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv"] Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.677850 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzntl\" (UniqueName: \"kubernetes.io/projected/add2c7c8-faaa-40f0-afce-74fa53bb3662-kube-api-access-kzntl\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.677925 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.678036 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.779501 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.779622 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzntl\" (UniqueName: \"kubernetes.io/projected/add2c7c8-faaa-40f0-afce-74fa53bb3662-kube-api-access-kzntl\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " 
pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.779659 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.780272 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.780292 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:35 crc kubenswrapper[4688]: I0930 17:26:35.808379 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzntl\" (UniqueName: \"kubernetes.io/projected/add2c7c8-faaa-40f0-afce-74fa53bb3662-kube-api-access-kzntl\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:36 crc kubenswrapper[4688]: I0930 17:26:36.097503 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:36 crc kubenswrapper[4688]: I0930 17:26:36.355803 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv"] Sep 30 17:26:36 crc kubenswrapper[4688]: I0930 17:26:36.605181 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" event={"ID":"add2c7c8-faaa-40f0-afce-74fa53bb3662","Type":"ContainerStarted","Data":"1f59071922a02c73f671752a93e7721842024b9c08e041a114fe81801a733b67"} Sep 30 17:26:36 crc kubenswrapper[4688]: I0930 17:26:36.605645 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" event={"ID":"add2c7c8-faaa-40f0-afce-74fa53bb3662","Type":"ContainerStarted","Data":"5c8ea646c788fb307e544ae23c750cde46f2fc094bd94824beaad2c64c7cc96b"} Sep 30 17:26:37 crc kubenswrapper[4688]: I0930 17:26:37.611652 4688 generic.go:334] "Generic (PLEG): container finished" podID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerID="1f59071922a02c73f671752a93e7721842024b9c08e041a114fe81801a733b67" exitCode=0 Sep 30 17:26:37 crc kubenswrapper[4688]: I0930 17:26:37.611871 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" event={"ID":"add2c7c8-faaa-40f0-afce-74fa53bb3662","Type":"ContainerDied","Data":"1f59071922a02c73f671752a93e7721842024b9c08e041a114fe81801a733b67"} Sep 30 17:26:39 crc kubenswrapper[4688]: I0930 17:26:39.648936 4688 generic.go:334] "Generic (PLEG): container finished" podID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerID="f90c11ef141c440de2a854bb42b68268dea131fcca1b5943de735220a65a8912" exitCode=0 Sep 30 17:26:39 crc kubenswrapper[4688]: I0930 17:26:39.649047 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" event={"ID":"add2c7c8-faaa-40f0-afce-74fa53bb3662","Type":"ContainerDied","Data":"f90c11ef141c440de2a854bb42b68268dea131fcca1b5943de735220a65a8912"} Sep 30 17:26:40 crc kubenswrapper[4688]: I0930 17:26:40.660560 4688 generic.go:334] "Generic (PLEG): container finished" podID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerID="a2bcad15fa7ca19d7a7b33e55a8078e23ada77cd8d5fa22caa642e506bc76e26" exitCode=0 Sep 30 17:26:40 crc kubenswrapper[4688]: I0930 17:26:40.660644 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" event={"ID":"add2c7c8-faaa-40f0-afce-74fa53bb3662","Type":"ContainerDied","Data":"a2bcad15fa7ca19d7a7b33e55a8078e23ada77cd8d5fa22caa642e506bc76e26"} Sep 30 17:26:41 crc kubenswrapper[4688]: I0930 17:26:41.907192 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:41 crc kubenswrapper[4688]: I0930 17:26:41.974185 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzntl\" (UniqueName: \"kubernetes.io/projected/add2c7c8-faaa-40f0-afce-74fa53bb3662-kube-api-access-kzntl\") pod \"add2c7c8-faaa-40f0-afce-74fa53bb3662\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " Sep 30 17:26:41 crc kubenswrapper[4688]: I0930 17:26:41.974309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-util\") pod \"add2c7c8-faaa-40f0-afce-74fa53bb3662\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " Sep 30 17:26:41 crc kubenswrapper[4688]: I0930 17:26:41.974335 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-bundle\") pod \"add2c7c8-faaa-40f0-afce-74fa53bb3662\" (UID: \"add2c7c8-faaa-40f0-afce-74fa53bb3662\") " Sep 30 17:26:41 crc kubenswrapper[4688]: I0930 17:26:41.975417 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-bundle" (OuterVolumeSpecName: "bundle") pod "add2c7c8-faaa-40f0-afce-74fa53bb3662" (UID: "add2c7c8-faaa-40f0-afce-74fa53bb3662"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:26:41 crc kubenswrapper[4688]: I0930 17:26:41.979879 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add2c7c8-faaa-40f0-afce-74fa53bb3662-kube-api-access-kzntl" (OuterVolumeSpecName: "kube-api-access-kzntl") pod "add2c7c8-faaa-40f0-afce-74fa53bb3662" (UID: "add2c7c8-faaa-40f0-afce-74fa53bb3662"). InnerVolumeSpecName "kube-api-access-kzntl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:26:42 crc kubenswrapper[4688]: I0930 17:26:42.075698 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzntl\" (UniqueName: \"kubernetes.io/projected/add2c7c8-faaa-40f0-afce-74fa53bb3662-kube-api-access-kzntl\") on node \"crc\" DevicePath \"\"" Sep 30 17:26:42 crc kubenswrapper[4688]: I0930 17:26:42.075753 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:26:42 crc kubenswrapper[4688]: I0930 17:26:42.229196 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-util" (OuterVolumeSpecName: "util") pod "add2c7c8-faaa-40f0-afce-74fa53bb3662" (UID: "add2c7c8-faaa-40f0-afce-74fa53bb3662"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:26:42 crc kubenswrapper[4688]: I0930 17:26:42.278033 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/add2c7c8-faaa-40f0-afce-74fa53bb3662-util\") on node \"crc\" DevicePath \"\"" Sep 30 17:26:42 crc kubenswrapper[4688]: I0930 17:26:42.678084 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" event={"ID":"add2c7c8-faaa-40f0-afce-74fa53bb3662","Type":"ContainerDied","Data":"5c8ea646c788fb307e544ae23c750cde46f2fc094bd94824beaad2c64c7cc96b"} Sep 30 17:26:42 crc kubenswrapper[4688]: I0930 17:26:42.678502 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c8ea646c788fb307e544ae23c750cde46f2fc094bd94824beaad2c64c7cc96b" Sep 30 17:26:42 crc kubenswrapper[4688]: I0930 17:26:42.678154 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv" Sep 30 17:26:43 crc kubenswrapper[4688]: I0930 17:26:43.645010 4688 scope.go:117] "RemoveContainer" containerID="d6a181c35cb2f7fd8127fa2ec43f7c545cf866a35d3d89b7419ce14721a13cb1" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.229971 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd"] Sep 30 17:26:44 crc kubenswrapper[4688]: E0930 17:26:44.230682 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerName="extract" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.230705 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerName="extract" Sep 30 17:26:44 crc kubenswrapper[4688]: E0930 17:26:44.230731 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerName="pull" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.230743 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerName="pull" Sep 30 17:26:44 crc kubenswrapper[4688]: E0930 17:26:44.230767 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerName="util" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.230776 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerName="util" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.230896 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="add2c7c8-faaa-40f0-afce-74fa53bb3662" containerName="extract" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.231442 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.233760 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.234534 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.234768 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-s4xcp" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.241307 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd"] Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.304141 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkxxq\" (UniqueName: \"kubernetes.io/projected/4e9693f8-bd8e-4ed6-8b39-0d7b926133c6-kube-api-access-bkxxq\") pod \"nmstate-operator-5d6f6cfd66-hmwrd\" (UID: \"4e9693f8-bd8e-4ed6-8b39-0d7b926133c6\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.405746 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkxxq\" (UniqueName: \"kubernetes.io/projected/4e9693f8-bd8e-4ed6-8b39-0d7b926133c6-kube-api-access-bkxxq\") pod \"nmstate-operator-5d6f6cfd66-hmwrd\" (UID: \"4e9693f8-bd8e-4ed6-8b39-0d7b926133c6\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.426486 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkxxq\" (UniqueName: \"kubernetes.io/projected/4e9693f8-bd8e-4ed6-8b39-0d7b926133c6-kube-api-access-bkxxq\") pod \"nmstate-operator-5d6f6cfd66-hmwrd\" (UID: \"4e9693f8-bd8e-4ed6-8b39-0d7b926133c6\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.547088 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.704425 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9wbfw_03cb23bc-dcc1-46e7-b238-67a6163f456d/kube-multus/2.log" Sep 30 17:26:44 crc kubenswrapper[4688]: I0930 17:26:44.773731 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd"] Sep 30 17:26:45 crc kubenswrapper[4688]: I0930 17:26:45.713837 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd" event={"ID":"4e9693f8-bd8e-4ed6-8b39-0d7b926133c6","Type":"ContainerStarted","Data":"15221e238487521e9f8f179f1c61fc2b3913df0659987b204a87289b5c224099"} Sep 30 17:26:47 crc kubenswrapper[4688]: I0930 17:26:47.732619 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd" event={"ID":"4e9693f8-bd8e-4ed6-8b39-0d7b926133c6","Type":"ContainerStarted","Data":"e80d80dea78254d0f7780b2620090f808f24aa86b07da2af4d54ef1972bd02d2"} Sep 30 17:26:47 crc kubenswrapper[4688]: I0930 17:26:47.750952 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-hmwrd" podStartSLOduration=1.629260605 podStartE2EDuration="3.750924034s" podCreationTimestamp="2025-09-30 17:26:44 +0000 UTC" firstStartedPulling="2025-09-30 17:26:44.783897262 +0000 UTC m=+662.079334830" lastFinishedPulling="2025-09-30 17:26:46.905560701 +0000 UTC m=+664.200998259" observedRunningTime="2025-09-30 17:26:47.749878957 +0000 UTC m=+665.045316535" watchObservedRunningTime="2025-09-30 17:26:47.750924034 +0000 UTC m=+665.046361612" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.663235 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-lzq79"] Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.664455 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-lzq79" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.666904 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-tsm96" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.680773 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5gb\" (UniqueName: \"kubernetes.io/projected/b3c49134-fd6f-43f7-b097-32d0e1384e5f-kube-api-access-5j5gb\") pod \"nmstate-metrics-58fcddf996-lzq79\" (UID: \"b3c49134-fd6f-43f7-b097-32d0e1384e5f\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-lzq79" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.684046 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-lzq79"] Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.694166 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr"] Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.700523 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.703781 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.707711 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-44mqf"] Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.710138 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.712215 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr"] Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.784016 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j5gb\" (UniqueName: \"kubernetes.io/projected/b3c49134-fd6f-43f7-b097-32d0e1384e5f-kube-api-access-5j5gb\") pod \"nmstate-metrics-58fcddf996-lzq79\" (UID: \"b3c49134-fd6f-43f7-b097-32d0e1384e5f\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-lzq79" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.784082 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3510490b-a2cb-45a6-9bce-e671a256cde6-ovs-socket\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.784127 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8twg\" (UniqueName: \"kubernetes.io/projected/5391ab76-cadc-44fb-b404-f1dd66d8f5f4-kube-api-access-p8twg\") pod \"nmstate-webhook-6d689559c5-j4lbr\" (UID: \"5391ab76-cadc-44fb-b404-f1dd66d8f5f4\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.784150 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3510490b-a2cb-45a6-9bce-e671a256cde6-dbus-socket\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.784180 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxlw6\" (UniqueName: \"kubernetes.io/projected/3510490b-a2cb-45a6-9bce-e671a256cde6-kube-api-access-qxlw6\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.784203 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3510490b-a2cb-45a6-9bce-e671a256cde6-nmstate-lock\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.784258 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5391ab76-cadc-44fb-b404-f1dd66d8f5f4-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-j4lbr\" 
(UID: \"5391ab76-cadc-44fb-b404-f1dd66d8f5f4\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.809800 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j5gb\" (UniqueName: \"kubernetes.io/projected/b3c49134-fd6f-43f7-b097-32d0e1384e5f-kube-api-access-5j5gb\") pod \"nmstate-metrics-58fcddf996-lzq79\" (UID: \"b3c49134-fd6f-43f7-b097-32d0e1384e5f\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-lzq79" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.812711 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb"] Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.813500 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.816599 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.816817 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.817069 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-7r85j" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.825257 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb"] Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.884882 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3510490b-a2cb-45a6-9bce-e671a256cde6-ovs-socket\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.884929 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8twg\" (UniqueName: \"kubernetes.io/projected/5391ab76-cadc-44fb-b404-f1dd66d8f5f4-kube-api-access-p8twg\") pod \"nmstate-webhook-6d689559c5-j4lbr\" (UID: \"5391ab76-cadc-44fb-b404-f1dd66d8f5f4\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.884948 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3510490b-a2cb-45a6-9bce-e671a256cde6-dbus-socket\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.884971 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c329119-f853-41a4-89de-0f5358bc030d-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.884991 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxlw6\" (UniqueName: \"kubernetes.io/projected/3510490b-a2cb-45a6-9bce-e671a256cde6-kube-api-access-qxlw6\") pod \"nmstate-handler-44mqf\" (UID: 
\"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.885008 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3510490b-a2cb-45a6-9bce-e671a256cde6-nmstate-lock\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.885046 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5c329119-f853-41a4-89de-0f5358bc030d-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.885173 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3510490b-a2cb-45a6-9bce-e671a256cde6-ovs-socket\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.885749 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3510490b-a2cb-45a6-9bce-e671a256cde6-dbus-socket\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.885923 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3510490b-a2cb-45a6-9bce-e671a256cde6-nmstate-lock\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.885077 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/5c329119-f853-41a4-89de-0f5358bc030d-kube-api-access-sznc6\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.885969 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5391ab76-cadc-44fb-b404-f1dd66d8f5f4-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-j4lbr\" (UID: \"5391ab76-cadc-44fb-b404-f1dd66d8f5f4\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.889257 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5391ab76-cadc-44fb-b404-f1dd66d8f5f4-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-j4lbr\" (UID: \"5391ab76-cadc-44fb-b404-f1dd66d8f5f4\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.901069 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8twg\" (UniqueName: \"kubernetes.io/projected/5391ab76-cadc-44fb-b404-f1dd66d8f5f4-kube-api-access-p8twg\") pod \"nmstate-webhook-6d689559c5-j4lbr\" 
(UID: \"5391ab76-cadc-44fb-b404-f1dd66d8f5f4\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.901832 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxlw6\" (UniqueName: \"kubernetes.io/projected/3510490b-a2cb-45a6-9bce-e671a256cde6-kube-api-access-qxlw6\") pod \"nmstate-handler-44mqf\" (UID: \"3510490b-a2cb-45a6-9bce-e671a256cde6\") " pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.986996 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5c329119-f853-41a4-89de-0f5358bc030d-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.987081 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/5c329119-f853-41a4-89de-0f5358bc030d-kube-api-access-sznc6\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.987148 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c329119-f853-41a4-89de-0f5358bc030d-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" Sep 30 17:26:48 crc kubenswrapper[4688]: E0930 17:26:48.987313 4688 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Sep 30 17:26:48 crc kubenswrapper[4688]: E0930 17:26:48.987384 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c329119-f853-41a4-89de-0f5358bc030d-plugin-serving-cert podName:5c329119-f853-41a4-89de-0f5358bc030d nodeName:}" failed. No retries permitted until 2025-09-30 17:26:49.487358969 +0000 UTC m=+666.782796517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5c329119-f853-41a4-89de-0f5358bc030d-plugin-serving-cert") pod "nmstate-console-plugin-864bb6dfb5-wpjgb" (UID: "5c329119-f853-41a4-89de-0f5358bc030d") : secret "plugin-serving-cert" not found Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.988690 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5c329119-f853-41a4-89de-0f5358bc030d-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.991323 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-9d49ccf8f-bhvpr"] Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.992257 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:48 crc kubenswrapper[4688]: I0930 17:26:48.999717 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-lzq79" Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.019983 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/5c329119-f853-41a4-89de-0f5358bc030d-kube-api-access-sznc6\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.050430 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.054461 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9d49ccf8f-bhvpr"] Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.058085 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.089509 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4d683be-4747-4554-994c-1d2b8588c78e-console-serving-cert\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.089607 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b4d683be-4747-4554-994c-1d2b8588c78e-console-oauth-config\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.089642 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5rm2\" (UniqueName: \"kubernetes.io/projected/b4d683be-4747-4554-994c-1d2b8588c78e-kube-api-access-z5rm2\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.089678 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-console-config\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.089704 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-service-ca\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.089745 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-oauth-serving-cert\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:49 crc 
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.089777 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-trusted-ca-bundle\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.191678 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4d683be-4747-4554-994c-1d2b8588c78e-console-serving-cert\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.192123 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5rm2\" (UniqueName: \"kubernetes.io/projected/b4d683be-4747-4554-994c-1d2b8588c78e-kube-api-access-z5rm2\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.192150 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b4d683be-4747-4554-994c-1d2b8588c78e-console-oauth-config\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.192188 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-console-config\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.192209 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-service-ca\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.192248 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-oauth-serving-cert\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.192273 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-trusted-ca-bundle\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.194405 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-service-ca\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.194684 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-oauth-serving-cert\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.194764 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-console-config\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.194931 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d683be-4747-4554-994c-1d2b8588c78e-trusted-ca-bundle\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.202422 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b4d683be-4747-4554-994c-1d2b8588c78e-console-oauth-config\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.203031 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4d683be-4747-4554-994c-1d2b8588c78e-console-serving-cert\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.205654 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-lzq79"]
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.213587 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5rm2\" (UniqueName: \"kubernetes.io/projected/b4d683be-4747-4554-994c-1d2b8588c78e-kube-api-access-z5rm2\") pod \"console-9d49ccf8f-bhvpr\" (UID: \"b4d683be-4747-4554-994c-1d2b8588c78e\") " pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.283576 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr"]
Sep 30 17:26:49 crc kubenswrapper[4688]: W0930 17:26:49.284866 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5391ab76_cadc_44fb_b404_f1dd66d8f5f4.slice/crio-fbb87dc22b5344dccb87b476299ac00fd82348713b0eaff46df29c5718815a85 WatchSource:0}: Error finding container fbb87dc22b5344dccb87b476299ac00fd82348713b0eaff46df29c5718815a85: Status 404 returned error can't find the container with id fbb87dc22b5344dccb87b476299ac00fd82348713b0eaff46df29c5718815a85
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.311999 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.311999 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9d49ccf8f-bhvpr"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.497301 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c329119-f853-41a4-89de-0f5358bc030d-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.504481 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c329119-f853-41a4-89de-0f5358bc030d-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-wpjgb\" (UID: \"5c329119-f853-41a4-89de-0f5358bc030d\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.525865 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9d49ccf8f-bhvpr"]
Sep 30 17:26:49 crc kubenswrapper[4688]: W0930 17:26:49.533891 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4d683be_4747_4554_994c_1d2b8588c78e.slice/crio-3c45dbccc9987baf6ca1385e56fc86e341876b911cd8ffd4bca8a5dca08db0a4 WatchSource:0}: Error finding container 3c45dbccc9987baf6ca1385e56fc86e341876b911cd8ffd4bca8a5dca08db0a4: Status 404 returned error can't find the container with id 3c45dbccc9987baf6ca1385e56fc86e341876b911cd8ffd4bca8a5dca08db0a4
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.741299 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb"
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.746636 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-44mqf" event={"ID":"3510490b-a2cb-45a6-9bce-e671a256cde6","Type":"ContainerStarted","Data":"d92447e43a972493b2dfcfc56b4961699a67d5740f13d394214e5c45f873caf0"}
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.748249 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-lzq79" event={"ID":"b3c49134-fd6f-43f7-b097-32d0e1384e5f","Type":"ContainerStarted","Data":"63fcc165fdcca3db34eecdd234be6eecf72e17c2612d8fa97141230b2be14509"}
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.749269 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" event={"ID":"5391ab76-cadc-44fb-b404-f1dd66d8f5f4","Type":"ContainerStarted","Data":"fbb87dc22b5344dccb87b476299ac00fd82348713b0eaff46df29c5718815a85"}
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.750839 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9d49ccf8f-bhvpr" event={"ID":"b4d683be-4747-4554-994c-1d2b8588c78e","Type":"ContainerStarted","Data":"042115d69faa31a7009c99d1c8b5beb6d3923c26d2f5ae1d4dc77e89d25e466e"}
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.750861 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9d49ccf8f-bhvpr" event={"ID":"b4d683be-4747-4554-994c-1d2b8588c78e","Type":"ContainerStarted","Data":"3c45dbccc9987baf6ca1385e56fc86e341876b911cd8ffd4bca8a5dca08db0a4"}
Sep 30 17:26:49 crc kubenswrapper[4688]: I0930 17:26:49.770671 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-9d49ccf8f-bhvpr" podStartSLOduration=1.770638828 podStartE2EDuration="1.770638828s" podCreationTimestamp="2025-09-30 17:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:26:49.767946468 +0000 UTC m=+667.063384036" watchObservedRunningTime="2025-09-30 17:26:49.770638828 +0000 UTC m=+667.066076386"
Sep 30 17:26:50 crc kubenswrapper[4688]: I0930 17:26:50.010620 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb"]
Sep 30 17:26:50 crc kubenswrapper[4688]: I0930 17:26:50.760187 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" event={"ID":"5c329119-f853-41a4-89de-0f5358bc030d","Type":"ContainerStarted","Data":"7ca5600ddbdba4ef10ff9c50da7b0598f34e0c325664cf4725ae5b5e0e340b9c"}
Sep 30 17:26:53 crc kubenswrapper[4688]: I0930 17:26:53.781763 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-lzq79" event={"ID":"b3c49134-fd6f-43f7-b097-32d0e1384e5f","Type":"ContainerStarted","Data":"4eda956649d23c151d154ed8c26fcf9632430ba5dc1ba4bcc7cf161ce76e99ec"}
Sep 30 17:26:53 crc kubenswrapper[4688]: I0930 17:26:53.785139 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" event={"ID":"5391ab76-cadc-44fb-b404-f1dd66d8f5f4","Type":"ContainerStarted","Data":"7d2f81e3079b6c2688dbaea6b24fdb85068a9ffa48c5716e65b900c9fc653569"}
Sep 30 17:26:53 crc kubenswrapper[4688]: I0930 17:26:53.785250 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr"
Sep 30 17:26:53 crc kubenswrapper[4688]: I0930 17:26:53.787046 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-44mqf" event={"ID":"3510490b-a2cb-45a6-9bce-e671a256cde6","Type":"ContainerStarted","Data":"827c0e7d481245203513f23b6aa5ef82def8b5e6db0dd519955335db3a74963c"}
Sep 30 17:26:53 crc kubenswrapper[4688]: I0930 17:26:53.787237 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-44mqf"
Sep 30 17:26:53 crc kubenswrapper[4688]: I0930 17:26:53.807337 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" podStartSLOduration=2.330220227 podStartE2EDuration="5.807305256s" podCreationTimestamp="2025-09-30 17:26:48 +0000 UTC" firstStartedPulling="2025-09-30 17:26:49.288451269 +0000 UTC m=+666.583888827" lastFinishedPulling="2025-09-30 17:26:52.765536298 +0000 UTC m=+670.060973856" observedRunningTime="2025-09-30 17:26:53.80323384 +0000 UTC m=+671.098671398" watchObservedRunningTime="2025-09-30 17:26:53.807305256 +0000 UTC m=+671.102742814"
Sep 30 17:26:53 crc kubenswrapper[4688]: I0930 17:26:53.821202 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-44mqf" podStartSLOduration=2.159755012 podStartE2EDuration="5.821184954s" podCreationTimestamp="2025-09-30 17:26:48 +0000 UTC" firstStartedPulling="2025-09-30 17:26:49.104684941 +0000 UTC m=+666.400122499" lastFinishedPulling="2025-09-30 17:26:52.766114843 +0000 UTC m=+670.061552441" observedRunningTime="2025-09-30 17:26:53.818725481 +0000 UTC m=+671.114163039" watchObservedRunningTime="2025-09-30 17:26:53.821184954 +0000 UTC m=+671.116622512"
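The two numbers in each "Observed pod startup duration" entry are related: the figures above are consistent with podStartSLOduration being podStartE2EDuration minus the time spent pulling images (lastFinishedPulling - firstStartedPulling). A worked check in Go, using the nmstate-webhook entry's timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches time.Time.String(), which is how these fields are logged.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-09-30 17:26:48 +0000 UTC")
	firstPull := parse("2025-09-30 17:26:49.288451269 +0000 UTC")
	lastPull := parse("2025-09-30 17:26:52.765536298 +0000 UTC")
	running := parse("2025-09-30 17:26:53.807305256 +0000 UTC")

	e2e := running.Sub(created)     // 5.807305256s = logged podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 3.477085029s spent pulling images
	slo := e2e - pull               // 2.330220227s = logged podStartSLOduration
	fmt.Println(e2e, pull, slo)
}

The console pod's entry has zero-value pull timestamps ("0001-01-01 ..."), so no pull window is subtracted and its SLO and E2E durations are identical. (The nmstate-handler entry differs from the wall-clock arithmetic by a few tens of nanoseconds; the kubelet appears to work from the monotonic m=+ offsets, which do reproduce its 2.159755012 exactly.)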
m=+671.116622512" Sep 30 17:26:54 crc kubenswrapper[4688]: I0930 17:26:54.797122 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" event={"ID":"5c329119-f853-41a4-89de-0f5358bc030d","Type":"ContainerStarted","Data":"eb7e355050acc7085b6d09ba8504ce6f572a2f7bba6e2feb8a7edfffcb106ad0"} Sep 30 17:26:54 crc kubenswrapper[4688]: I0930 17:26:54.823620 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-wpjgb" podStartSLOduration=2.861438731 podStartE2EDuration="6.823590314s" podCreationTimestamp="2025-09-30 17:26:48 +0000 UTC" firstStartedPulling="2025-09-30 17:26:50.021339055 +0000 UTC m=+667.316776613" lastFinishedPulling="2025-09-30 17:26:53.983490638 +0000 UTC m=+671.278928196" observedRunningTime="2025-09-30 17:26:54.815944327 +0000 UTC m=+672.111381895" watchObservedRunningTime="2025-09-30 17:26:54.823590314 +0000 UTC m=+672.119027872" Sep 30 17:26:56 crc kubenswrapper[4688]: I0930 17:26:56.812916 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-lzq79" event={"ID":"b3c49134-fd6f-43f7-b097-32d0e1384e5f","Type":"ContainerStarted","Data":"d6d750744e42872dfb1f85bd98f1a5abd662a5f391da60f38692ef120b908a12"} Sep 30 17:26:56 crc kubenswrapper[4688]: I0930 17:26:56.835291 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58fcddf996-lzq79" podStartSLOduration=1.660631225 podStartE2EDuration="8.835271421s" podCreationTimestamp="2025-09-30 17:26:48 +0000 UTC" firstStartedPulling="2025-09-30 17:26:49.216866819 +0000 UTC m=+666.512304387" lastFinishedPulling="2025-09-30 17:26:56.391507025 +0000 UTC m=+673.686944583" observedRunningTime="2025-09-30 17:26:56.832085248 +0000 UTC m=+674.127522816" watchObservedRunningTime="2025-09-30 17:26:56.835271421 +0000 UTC m=+674.130708979" Sep 30 17:26:59 crc kubenswrapper[4688]: I0930 17:26:59.084343 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-44mqf" Sep 30 17:26:59 crc kubenswrapper[4688]: I0930 17:26:59.312822 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:59 crc kubenswrapper[4688]: I0930 17:26:59.312931 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:59 crc kubenswrapper[4688]: I0930 17:26:59.320290 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:59 crc kubenswrapper[4688]: I0930 17:26:59.838406 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-9d49ccf8f-bhvpr" Sep 30 17:26:59 crc kubenswrapper[4688]: I0930 17:26:59.904109 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wl7nh"] Sep 30 17:27:09 crc kubenswrapper[4688]: I0930 17:27:09.058110 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6d689559c5-j4lbr" Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.398515 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"] Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.400434 4688 util.go:30] "No sandbox for pod can be 
Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.400434 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"
Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.403954 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.419056 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"]
Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.601132 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmhhd\" (UniqueName: \"kubernetes.io/projected/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-kube-api-access-rmhhd\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"
Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.601246 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"
Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.601496 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"
Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.702038 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"
Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.702150 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"
Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.702181 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmhhd\" (UniqueName: \"kubernetes.io/projected/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-kube-api-access-rmhhd\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"
\"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.703180 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" Sep 30 17:27:23 crc kubenswrapper[4688]: I0930 17:27:23.733216 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmhhd\" (UniqueName: \"kubernetes.io/projected/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-kube-api-access-rmhhd\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" Sep 30 17:27:24 crc kubenswrapper[4688]: I0930 17:27:24.028427 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" Sep 30 17:27:24 crc kubenswrapper[4688]: I0930 17:27:24.476757 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"] Sep 30 17:27:24 crc kubenswrapper[4688]: I0930 17:27:24.954509 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wl7nh" podUID="19515a5a-5268-41e3-8288-0e41a71c9dc4" containerName="console" containerID="cri-o://1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2" gracePeriod=15 Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.017363 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" event={"ID":"8856e90e-bbf3-4c5b-a038-0380a7b7c70d","Type":"ContainerStarted","Data":"0ac2b918fa1c0bc3ade71b7bc313098e1ef1110262976ba5bb1976031b527f68"} Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.795407 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wl7nh_19515a5a-5268-41e3-8288-0e41a71c9dc4/console/0.log" Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.795845 4688 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.795845 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wl7nh"
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.936506 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-serving-cert\") pod \"19515a5a-5268-41e3-8288-0e41a71c9dc4\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") "
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.936683 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-oauth-config\") pod \"19515a5a-5268-41e3-8288-0e41a71c9dc4\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") "
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.936730 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-service-ca\") pod \"19515a5a-5268-41e3-8288-0e41a71c9dc4\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") "
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.936786 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-config\") pod \"19515a5a-5268-41e3-8288-0e41a71c9dc4\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") "
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.936848 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-oauth-serving-cert\") pod \"19515a5a-5268-41e3-8288-0e41a71c9dc4\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") "
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.936868 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-trusted-ca-bundle\") pod \"19515a5a-5268-41e3-8288-0e41a71c9dc4\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") "
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.936892 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdgvz\" (UniqueName: \"kubernetes.io/projected/19515a5a-5268-41e3-8288-0e41a71c9dc4-kube-api-access-cdgvz\") pod \"19515a5a-5268-41e3-8288-0e41a71c9dc4\" (UID: \"19515a5a-5268-41e3-8288-0e41a71c9dc4\") "
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.937610 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-service-ca" (OuterVolumeSpecName: "service-ca") pod "19515a5a-5268-41e3-8288-0e41a71c9dc4" (UID: "19515a5a-5268-41e3-8288-0e41a71c9dc4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.937755 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "19515a5a-5268-41e3-8288-0e41a71c9dc4" (UID: "19515a5a-5268-41e3-8288-0e41a71c9dc4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.937768 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-config" (OuterVolumeSpecName: "console-config") pod "19515a5a-5268-41e3-8288-0e41a71c9dc4" (UID: "19515a5a-5268-41e3-8288-0e41a71c9dc4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.938118 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "19515a5a-5268-41e3-8288-0e41a71c9dc4" (UID: "19515a5a-5268-41e3-8288-0e41a71c9dc4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.944235 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "19515a5a-5268-41e3-8288-0e41a71c9dc4" (UID: "19515a5a-5268-41e3-8288-0e41a71c9dc4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.944417 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19515a5a-5268-41e3-8288-0e41a71c9dc4-kube-api-access-cdgvz" (OuterVolumeSpecName: "kube-api-access-cdgvz") pod "19515a5a-5268-41e3-8288-0e41a71c9dc4" (UID: "19515a5a-5268-41e3-8288-0e41a71c9dc4"). InnerVolumeSpecName "kube-api-access-cdgvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:27:25 crc kubenswrapper[4688]: I0930 17:27:25.944799 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "19515a5a-5268-41e3-8288-0e41a71c9dc4" (UID: "19515a5a-5268-41e3-8288-0e41a71c9dc4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.025239 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wl7nh_19515a5a-5268-41e3-8288-0e41a71c9dc4/console/0.log" Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.025314 4688 generic.go:334] "Generic (PLEG): container finished" podID="19515a5a-5268-41e3-8288-0e41a71c9dc4" containerID="1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2" exitCode=2 Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.025422 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wl7nh" event={"ID":"19515a5a-5268-41e3-8288-0e41a71c9dc4","Type":"ContainerDied","Data":"1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2"} Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.025455 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wl7nh" event={"ID":"19515a5a-5268-41e3-8288-0e41a71c9dc4","Type":"ContainerDied","Data":"a558776cc38d7f2d0c10a9c75995c382c093338ef73c1895e8e65ffd30557300"} Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.025448 4688 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.025448 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wl7nh"
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.025505 4688 scope.go:117] "RemoveContainer" containerID="1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2"
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.036209 4688 generic.go:334] "Generic (PLEG): container finished" podID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerID="9cc1041a5bfcfdf7b2307a5b3c3610bf9951bd245f22ccffe04f533ed72cdc87" exitCode=0
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.036288 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" event={"ID":"8856e90e-bbf3-4c5b-a038-0380a7b7c70d","Type":"ContainerDied","Data":"9cc1041a5bfcfdf7b2307a5b3c3610bf9951bd245f22ccffe04f533ed72cdc87"}
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.038327 4688 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-oauth-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.038373 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-service-ca\") on node \"crc\" DevicePath \"\""
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.038390 4688 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.038404 4688 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.038415 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19515a5a-5268-41e3-8288-0e41a71c9dc4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.038428 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdgvz\" (UniqueName: \"kubernetes.io/projected/19515a5a-5268-41e3-8288-0e41a71c9dc4-kube-api-access-cdgvz\") on node \"crc\" DevicePath \"\""
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.038440 4688 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/19515a5a-5268-41e3-8288-0e41a71c9dc4-console-serving-cert\") on node \"crc\" DevicePath \"\""
Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.053949 4688 scope.go:117] "RemoveContainer" containerID="1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2"
Sep 30 17:27:26 crc kubenswrapper[4688]: E0930 17:27:26.054448 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2\": container with ID starting with 1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2 not found: ID does not exist" containerID="1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2"
containerID={"Type":"cri-o","ID":"1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2"} err="failed to get container status \"1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2\": rpc error: code = NotFound desc = could not find container \"1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2\": container with ID starting with 1d0ea555a7e90583e8bdd45c57f74d8d40e84b1a0ea073d2d5c076f5d342a3b2 not found: ID does not exist" Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.075864 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wl7nh"] Sep 30 17:27:26 crc kubenswrapper[4688]: I0930 17:27:26.079634 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wl7nh"] Sep 30 17:27:27 crc kubenswrapper[4688]: I0930 17:27:27.438630 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19515a5a-5268-41e3-8288-0e41a71c9dc4" path="/var/lib/kubelet/pods/19515a5a-5268-41e3-8288-0e41a71c9dc4/volumes" Sep 30 17:27:29 crc kubenswrapper[4688]: I0930 17:27:29.060063 4688 generic.go:334] "Generic (PLEG): container finished" podID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerID="55eb267ae6a1efdbb3a2116255d650220f6f0a31d5ad420d53cb2f37592c0bdc" exitCode=0 Sep 30 17:27:29 crc kubenswrapper[4688]: I0930 17:27:29.060177 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" event={"ID":"8856e90e-bbf3-4c5b-a038-0380a7b7c70d","Type":"ContainerDied","Data":"55eb267ae6a1efdbb3a2116255d650220f6f0a31d5ad420d53cb2f37592c0bdc"} Sep 30 17:27:30 crc kubenswrapper[4688]: I0930 17:27:30.069753 4688 generic.go:334] "Generic (PLEG): container finished" podID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerID="308df1a83c8f449961de382d6e91d381a7e06fd1cfdc063cd3267b8a9f2df62c" exitCode=0 Sep 30 17:27:30 crc kubenswrapper[4688]: I0930 17:27:30.069819 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" event={"ID":"8856e90e-bbf3-4c5b-a038-0380a7b7c70d","Type":"ContainerDied","Data":"308df1a83c8f449961de382d6e91d381a7e06fd1cfdc063cd3267b8a9f2df62c"} Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.341319 4688 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.341319 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx"
Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.421243 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-util\") pod \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") "
Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.421411 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-bundle\") pod \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") "
Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.421542 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmhhd\" (UniqueName: \"kubernetes.io/projected/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-kube-api-access-rmhhd\") pod \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\" (UID: \"8856e90e-bbf3-4c5b-a038-0380a7b7c70d\") "
Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.424201 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-bundle" (OuterVolumeSpecName: "bundle") pod "8856e90e-bbf3-4c5b-a038-0380a7b7c70d" (UID: "8856e90e-bbf3-4c5b-a038-0380a7b7c70d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.429192 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-kube-api-access-rmhhd" (OuterVolumeSpecName: "kube-api-access-rmhhd") pod "8856e90e-bbf3-4c5b-a038-0380a7b7c70d" (UID: "8856e90e-bbf3-4c5b-a038-0380a7b7c70d"). InnerVolumeSpecName "kube-api-access-rmhhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.488748 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-util" (OuterVolumeSpecName: "util") pod "8856e90e-bbf3-4c5b-a038-0380a7b7c70d" (UID: "8856e90e-bbf3-4c5b-a038-0380a7b7c70d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.522898 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmhhd\" (UniqueName: \"kubernetes.io/projected/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-kube-api-access-rmhhd\") on node \"crc\" DevicePath \"\"" Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.522949 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-util\") on node \"crc\" DevicePath \"\"" Sep 30 17:27:31 crc kubenswrapper[4688]: I0930 17:27:31.522963 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8856e90e-bbf3-4c5b-a038-0380a7b7c70d-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:27:32 crc kubenswrapper[4688]: I0930 17:27:32.085660 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" event={"ID":"8856e90e-bbf3-4c5b-a038-0380a7b7c70d","Type":"ContainerDied","Data":"0ac2b918fa1c0bc3ade71b7bc313098e1ef1110262976ba5bb1976031b527f68"} Sep 30 17:27:32 crc kubenswrapper[4688]: I0930 17:27:32.085720 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx" Sep 30 17:27:32 crc kubenswrapper[4688]: I0930 17:27:32.085720 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac2b918fa1c0bc3ade71b7bc313098e1ef1110262976ba5bb1976031b527f68" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.451398 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv"] Sep 30 17:27:41 crc kubenswrapper[4688]: E0930 17:27:41.452373 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerName="util" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.452582 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerName="util" Sep 30 17:27:41 crc kubenswrapper[4688]: E0930 17:27:41.452663 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerName="extract" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.452673 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerName="extract" Sep 30 17:27:41 crc kubenswrapper[4688]: E0930 17:27:41.452687 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19515a5a-5268-41e3-8288-0e41a71c9dc4" containerName="console" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.452695 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="19515a5a-5268-41e3-8288-0e41a71c9dc4" containerName="console" Sep 30 17:27:41 crc kubenswrapper[4688]: E0930 17:27:41.452705 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerName="pull" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.452711 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerName="pull" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.452823 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8856e90e-bbf3-4c5b-a038-0380a7b7c70d" containerName="extract" Sep 
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.452847 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="19515a5a-5268-41e3-8288-0e41a71c9dc4" containerName="console"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.453370 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.462440 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-vkcfq"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.462931 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.463020 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.464287 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.464914 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.472209 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqt57\" (UniqueName: \"kubernetes.io/projected/dfdf8b45-1929-49f8-95ef-94751966a5c3-kube-api-access-mqt57\") pod \"metallb-operator-controller-manager-6d94b69f58-xr5pv\" (UID: \"dfdf8b45-1929-49f8-95ef-94751966a5c3\") " pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.472307 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dfdf8b45-1929-49f8-95ef-94751966a5c3-apiservice-cert\") pod \"metallb-operator-controller-manager-6d94b69f58-xr5pv\" (UID: \"dfdf8b45-1929-49f8-95ef-94751966a5c3\") " pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.472367 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfdf8b45-1929-49f8-95ef-94751966a5c3-webhook-cert\") pod \"metallb-operator-controller-manager-6d94b69f58-xr5pv\" (UID: \"dfdf8b45-1929-49f8-95ef-94751966a5c3\") " pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.479421 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv"]
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.573967 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqt57\" (UniqueName: \"kubernetes.io/projected/dfdf8b45-1929-49f8-95ef-94751966a5c3-kube-api-access-mqt57\") pod \"metallb-operator-controller-manager-6d94b69f58-xr5pv\" (UID: \"dfdf8b45-1929-49f8-95ef-94751966a5c3\") " pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv"
\"kubernetes.io/secret/dfdf8b45-1929-49f8-95ef-94751966a5c3-apiservice-cert\") pod \"metallb-operator-controller-manager-6d94b69f58-xr5pv\" (UID: \"dfdf8b45-1929-49f8-95ef-94751966a5c3\") " pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.574059 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfdf8b45-1929-49f8-95ef-94751966a5c3-webhook-cert\") pod \"metallb-operator-controller-manager-6d94b69f58-xr5pv\" (UID: \"dfdf8b45-1929-49f8-95ef-94751966a5c3\") " pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.581656 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dfdf8b45-1929-49f8-95ef-94751966a5c3-apiservice-cert\") pod \"metallb-operator-controller-manager-6d94b69f58-xr5pv\" (UID: \"dfdf8b45-1929-49f8-95ef-94751966a5c3\") " pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.582420 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfdf8b45-1929-49f8-95ef-94751966a5c3-webhook-cert\") pod \"metallb-operator-controller-manager-6d94b69f58-xr5pv\" (UID: \"dfdf8b45-1929-49f8-95ef-94751966a5c3\") " pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.597250 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqt57\" (UniqueName: \"kubernetes.io/projected/dfdf8b45-1929-49f8-95ef-94751966a5c3-kube-api-access-mqt57\") pod \"metallb-operator-controller-manager-6d94b69f58-xr5pv\" (UID: \"dfdf8b45-1929-49f8-95ef-94751966a5c3\") " pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.774512 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.802166 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"] Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.803712 4688 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.803712 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.810438 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.810512 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-knjwn"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.810518 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.816838 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"]
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.879183 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjk2\" (UniqueName: \"kubernetes.io/projected/0f7203cd-f438-40a7-b09c-210c6afa21ff-kube-api-access-5pjk2\") pod \"metallb-operator-webhook-server-85b8f7c4f8-d477v\" (UID: \"0f7203cd-f438-40a7-b09c-210c6afa21ff\") " pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.879237 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f7203cd-f438-40a7-b09c-210c6afa21ff-webhook-cert\") pod \"metallb-operator-webhook-server-85b8f7c4f8-d477v\" (UID: \"0f7203cd-f438-40a7-b09c-210c6afa21ff\") " pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.879344 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f7203cd-f438-40a7-b09c-210c6afa21ff-apiservice-cert\") pod \"metallb-operator-webhook-server-85b8f7c4f8-d477v\" (UID: \"0f7203cd-f438-40a7-b09c-210c6afa21ff\") " pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.980528 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pjk2\" (UniqueName: \"kubernetes.io/projected/0f7203cd-f438-40a7-b09c-210c6afa21ff-kube-api-access-5pjk2\") pod \"metallb-operator-webhook-server-85b8f7c4f8-d477v\" (UID: \"0f7203cd-f438-40a7-b09c-210c6afa21ff\") " pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.981100 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f7203cd-f438-40a7-b09c-210c6afa21ff-webhook-cert\") pod \"metallb-operator-webhook-server-85b8f7c4f8-d477v\" (UID: \"0f7203cd-f438-40a7-b09c-210c6afa21ff\") " pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:41 crc kubenswrapper[4688]: I0930 17:27:41.982045 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f7203cd-f438-40a7-b09c-210c6afa21ff-apiservice-cert\") pod \"metallb-operator-webhook-server-85b8f7c4f8-d477v\" (UID: \"0f7203cd-f438-40a7-b09c-210c6afa21ff\") " pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:42 crc kubenswrapper[4688]: I0930 17:27:42.000582 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0f7203cd-f438-40a7-b09c-210c6afa21ff-webhook-cert\") pod \"metallb-operator-webhook-server-85b8f7c4f8-d477v\" (UID: \"0f7203cd-f438-40a7-b09c-210c6afa21ff\") " pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:42 crc kubenswrapper[4688]: I0930 17:27:42.001298 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0f7203cd-f438-40a7-b09c-210c6afa21ff-apiservice-cert\") pod \"metallb-operator-webhook-server-85b8f7c4f8-d477v\" (UID: \"0f7203cd-f438-40a7-b09c-210c6afa21ff\") " pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:42 crc kubenswrapper[4688]: I0930 17:27:42.058987 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pjk2\" (UniqueName: \"kubernetes.io/projected/0f7203cd-f438-40a7-b09c-210c6afa21ff-kube-api-access-5pjk2\") pod \"metallb-operator-webhook-server-85b8f7c4f8-d477v\" (UID: \"0f7203cd-f438-40a7-b09c-210c6afa21ff\") " pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:42 crc kubenswrapper[4688]: I0930 17:27:42.161233 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"
Sep 30 17:27:42 crc kubenswrapper[4688]: I0930 17:27:42.197352 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv"]
Sep 30 17:27:42 crc kubenswrapper[4688]: W0930 17:27:42.217952 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfdf8b45_1929_49f8_95ef_94751966a5c3.slice/crio-7558acf72bf0abf68b21c5f413118c81fd340408fb82803c23aabfd8c6d92827 WatchSource:0}: Error finding container 7558acf72bf0abf68b21c5f413118c81fd340408fb82803c23aabfd8c6d92827: Status 404 returned error can't find the container with id 7558acf72bf0abf68b21c5f413118c81fd340408fb82803c23aabfd8c6d92827
Sep 30 17:27:42 crc kubenswrapper[4688]: I0930 17:27:42.438199 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v"]
Sep 30 17:27:42 crc kubenswrapper[4688]: W0930 17:27:42.453383 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f7203cd_f438_40a7_b09c_210c6afa21ff.slice/crio-69f5036a2057eb017aa840321ca033bf1746ee227fb7a6e19ffdf6fcb9bdab71 WatchSource:0}: Error finding container 69f5036a2057eb017aa840321ca033bf1746ee227fb7a6e19ffdf6fcb9bdab71: Status 404 returned error can't find the container with id 69f5036a2057eb017aa840321ca033bf1746ee227fb7a6e19ffdf6fcb9bdab71
Sep 30 17:27:43 crc kubenswrapper[4688]: I0930 17:27:43.164911 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" event={"ID":"dfdf8b45-1929-49f8-95ef-94751966a5c3","Type":"ContainerStarted","Data":"7558acf72bf0abf68b21c5f413118c81fd340408fb82803c23aabfd8c6d92827"}
event={"ID":"0f7203cd-f438-40a7-b09c-210c6afa21ff","Type":"ContainerStarted","Data":"69f5036a2057eb017aa840321ca033bf1746ee227fb7a6e19ffdf6fcb9bdab71"} Sep 30 17:27:48 crc kubenswrapper[4688]: I0930 17:27:48.208356 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v" event={"ID":"0f7203cd-f438-40a7-b09c-210c6afa21ff","Type":"ContainerStarted","Data":"c5fdab9335f5632d67fed98bbd80fca2ccb7762f20704d5784cdbd2e2efb54c8"} Sep 30 17:27:48 crc kubenswrapper[4688]: I0930 17:27:48.208839 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v" Sep 30 17:27:48 crc kubenswrapper[4688]: I0930 17:27:48.210340 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" event={"ID":"dfdf8b45-1929-49f8-95ef-94751966a5c3","Type":"ContainerStarted","Data":"11145fbe479467c1c81e2cd796f854e50e452b082a51d5037f2e1b78deef48e4"} Sep 30 17:27:48 crc kubenswrapper[4688]: I0930 17:27:48.210491 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" Sep 30 17:27:48 crc kubenswrapper[4688]: I0930 17:27:48.235702 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v" podStartSLOduration=1.894228584 podStartE2EDuration="7.235682814s" podCreationTimestamp="2025-09-30 17:27:41 +0000 UTC" firstStartedPulling="2025-09-30 17:27:42.458866859 +0000 UTC m=+719.754304417" lastFinishedPulling="2025-09-30 17:27:47.800321089 +0000 UTC m=+725.095758647" observedRunningTime="2025-09-30 17:27:48.232740678 +0000 UTC m=+725.528178246" watchObservedRunningTime="2025-09-30 17:27:48.235682814 +0000 UTC m=+725.531120362" Sep 30 17:27:48 crc kubenswrapper[4688]: I0930 17:27:48.262522 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" podStartSLOduration=1.735523836 podStartE2EDuration="7.262496817s" podCreationTimestamp="2025-09-30 17:27:41 +0000 UTC" firstStartedPulling="2025-09-30 17:27:42.226563309 +0000 UTC m=+719.522000867" lastFinishedPulling="2025-09-30 17:27:47.75353629 +0000 UTC m=+725.048973848" observedRunningTime="2025-09-30 17:27:48.258717369 +0000 UTC m=+725.554154927" watchObservedRunningTime="2025-09-30 17:27:48.262496817 +0000 UTC m=+725.557934375" Sep 30 17:27:52 crc kubenswrapper[4688]: I0930 17:27:52.546425 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:27:52 crc kubenswrapper[4688]: I0930 17:27:52.547235 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:28:02 crc kubenswrapper[4688]: I0930 17:28:02.190215 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-85b8f7c4f8-d477v" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.188742 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qkj9n"]
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.189782 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" podUID="a7575a89-fbed-4028-a04f-2ab40e7a9081" containerName="controller-manager" containerID="cri-o://8fca22d5ce5fbb61a7f38dc497985af48e252ec9bd2b6a30ca4157c47b48e37a" gracePeriod=30
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.224789 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v"]
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.225072 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" podUID="4f4fc367-8dbb-4df3-9fa6-564810f36316" containerName="route-controller-manager" containerID="cri-o://89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119" gracePeriod=30
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.361877 4688 generic.go:334] "Generic (PLEG): container finished" podID="a7575a89-fbed-4028-a04f-2ab40e7a9081" containerID="8fca22d5ce5fbb61a7f38dc497985af48e252ec9bd2b6a30ca4157c47b48e37a" exitCode=0
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.361977 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" event={"ID":"a7575a89-fbed-4028-a04f-2ab40e7a9081","Type":"ContainerDied","Data":"8fca22d5ce5fbb61a7f38dc497985af48e252ec9bd2b6a30ca4157c47b48e37a"}
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.637171 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n"
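"Killing container with a grace period ... gracePeriod=30" describes the standard two-phase stop: the runtime signals the container's init process with SIGTERM, waits up to the grace period, and only sends SIGKILL on timeout. Here the controller-manager exited cleanly (exitCode=0) well inside its 30 seconds. A sketch of the pattern, using os/exec as a stand-in for the container's PID 1 (CRI-O's real path goes through conmon, not os/exec):

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace signals the process, waits up to grace, then force-kills.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite request to shut down
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		// exited within the grace period, like the exitCode=0 containers above
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period exhausted: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	_ = cmd.Start()
	stopWithGrace(cmd, 30*time.Second)
}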
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.697439 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v"
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.772880 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-client-ca\") pod \"a7575a89-fbed-4028-a04f-2ab40e7a9081\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") "
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.772940 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/a7575a89-fbed-4028-a04f-2ab40e7a9081-kube-api-access-w4wm7\") pod \"a7575a89-fbed-4028-a04f-2ab40e7a9081\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") "
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.772969 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-config\") pod \"a7575a89-fbed-4028-a04f-2ab40e7a9081\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") "
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.772984 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-proxy-ca-bundles\") pod \"a7575a89-fbed-4028-a04f-2ab40e7a9081\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") "
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.773085 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7575a89-fbed-4028-a04f-2ab40e7a9081-serving-cert\") pod \"a7575a89-fbed-4028-a04f-2ab40e7a9081\" (UID: \"a7575a89-fbed-4028-a04f-2ab40e7a9081\") "
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.773948 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-client-ca" (OuterVolumeSpecName: "client-ca") pod "a7575a89-fbed-4028-a04f-2ab40e7a9081" (UID: "a7575a89-fbed-4028-a04f-2ab40e7a9081"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.774299 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a7575a89-fbed-4028-a04f-2ab40e7a9081" (UID: "a7575a89-fbed-4028-a04f-2ab40e7a9081"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.774431 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-config" (OuterVolumeSpecName: "config") pod "a7575a89-fbed-4028-a04f-2ab40e7a9081" (UID: "a7575a89-fbed-4028-a04f-2ab40e7a9081"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.781459 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7575a89-fbed-4028-a04f-2ab40e7a9081-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a7575a89-fbed-4028-a04f-2ab40e7a9081" (UID: "a7575a89-fbed-4028-a04f-2ab40e7a9081"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.781584 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7575a89-fbed-4028-a04f-2ab40e7a9081-kube-api-access-w4wm7" (OuterVolumeSpecName: "kube-api-access-w4wm7") pod "a7575a89-fbed-4028-a04f-2ab40e7a9081" (UID: "a7575a89-fbed-4028-a04f-2ab40e7a9081"). InnerVolumeSpecName "kube-api-access-w4wm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.873949 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-client-ca\") pod \"4f4fc367-8dbb-4df3-9fa6-564810f36316\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.874040 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d78qt\" (UniqueName: \"kubernetes.io/projected/4f4fc367-8dbb-4df3-9fa6-564810f36316-kube-api-access-d78qt\") pod \"4f4fc367-8dbb-4df3-9fa6-564810f36316\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.874125 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4fc367-8dbb-4df3-9fa6-564810f36316-serving-cert\") pod \"4f4fc367-8dbb-4df3-9fa6-564810f36316\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.874201 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-config\") pod \"4f4fc367-8dbb-4df3-9fa6-564810f36316\" (UID: \"4f4fc367-8dbb-4df3-9fa6-564810f36316\") " Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.874562 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7575a89-fbed-4028-a04f-2ab40e7a9081-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.874600 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.874609 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/a7575a89-fbed-4028-a04f-2ab40e7a9081-kube-api-access-w4wm7\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.874619 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.874628 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7575a89-fbed-4028-a04f-2ab40e7a9081-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.874866 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-client-ca" (OuterVolumeSpecName: "client-ca") pod "4f4fc367-8dbb-4df3-9fa6-564810f36316" (UID: 
"4f4fc367-8dbb-4df3-9fa6-564810f36316"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.875051 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-config" (OuterVolumeSpecName: "config") pod "4f4fc367-8dbb-4df3-9fa6-564810f36316" (UID: "4f4fc367-8dbb-4df3-9fa6-564810f36316"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.878706 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4fc367-8dbb-4df3-9fa6-564810f36316-kube-api-access-d78qt" (OuterVolumeSpecName: "kube-api-access-d78qt") pod "4f4fc367-8dbb-4df3-9fa6-564810f36316" (UID: "4f4fc367-8dbb-4df3-9fa6-564810f36316"). InnerVolumeSpecName "kube-api-access-d78qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.878977 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4fc367-8dbb-4df3-9fa6-564810f36316-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4f4fc367-8dbb-4df3-9fa6-564810f36316" (UID: "4f4fc367-8dbb-4df3-9fa6-564810f36316"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.975604 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.975654 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d78qt\" (UniqueName: \"kubernetes.io/projected/4f4fc367-8dbb-4df3-9fa6-564810f36316-kube-api-access-d78qt\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.975665 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4fc367-8dbb-4df3-9fa6-564810f36316-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:08 crc kubenswrapper[4688]: I0930 17:28:08.975674 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4fc367-8dbb-4df3-9fa6-564810f36316-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.372091 4688 generic.go:334] "Generic (PLEG): container finished" podID="4f4fc367-8dbb-4df3-9fa6-564810f36316" containerID="89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119" exitCode=0 Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.372179 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.372179 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" event={"ID":"4f4fc367-8dbb-4df3-9fa6-564810f36316","Type":"ContainerDied","Data":"89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119"} Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.373755 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v" event={"ID":"4f4fc367-8dbb-4df3-9fa6-564810f36316","Type":"ContainerDied","Data":"a6583c45533ad0cc83fa6d229d9a67391e6f198c866816df86bcb2b7e68945f2"} Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.373787 4688 scope.go:117] "RemoveContainer" containerID="89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.376038 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" event={"ID":"a7575a89-fbed-4028-a04f-2ab40e7a9081","Type":"ContainerDied","Data":"1738f9cdba19375f304c7d2a61983ff5010c50541e5b816ec59f648a9af84d3c"} Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.376109 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qkj9n" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.396949 4688 scope.go:117] "RemoveContainer" containerID="89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119" Sep 30 17:28:09 crc kubenswrapper[4688]: E0930 17:28:09.397488 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119\": container with ID starting with 89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119 not found: ID does not exist" containerID="89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.397532 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119"} err="failed to get container status \"89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119\": rpc error: code = NotFound desc = could not find container \"89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119\": container with ID starting with 89637d513a9eb6e1e8ba855f35243b8e04bbca84bb30a818f5c9d8b50e12a119 not found: ID does not exist" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.397560 4688 scope.go:117] "RemoveContainer" containerID="8fca22d5ce5fbb61a7f38dc497985af48e252ec9bd2b6a30ca4157c47b48e37a" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.417208 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v"] Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.437986 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q828v"] Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.461002 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4fc367-8dbb-4df3-9fa6-564810f36316" 
path="/var/lib/kubelet/pods/4f4fc367-8dbb-4df3-9fa6-564810f36316/volumes" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.462554 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qkj9n"] Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.463059 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qkj9n"] Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.731523 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59cbbb5d67-57zkp"] Sep 30 17:28:09 crc kubenswrapper[4688]: E0930 17:28:09.731861 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7575a89-fbed-4028-a04f-2ab40e7a9081" containerName="controller-manager" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.731883 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7575a89-fbed-4028-a04f-2ab40e7a9081" containerName="controller-manager" Sep 30 17:28:09 crc kubenswrapper[4688]: E0930 17:28:09.731902 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4fc367-8dbb-4df3-9fa6-564810f36316" containerName="route-controller-manager" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.731911 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4fc367-8dbb-4df3-9fa6-564810f36316" containerName="route-controller-manager" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.732069 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7575a89-fbed-4028-a04f-2ab40e7a9081" containerName="controller-manager" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.732094 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4fc367-8dbb-4df3-9fa6-564810f36316" containerName="route-controller-manager" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.732844 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.736067 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.736494 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv"] Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.737098 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.737472 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.737504 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.738724 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.743286 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.743892 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.747532 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59cbbb5d67-57zkp"] Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.748674 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.749522 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.749722 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.750020 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.750355 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.750355 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.751303 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.772946 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv"] Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.888475 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-config\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.888865 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39e39eb-b65e-49a7-a24b-16d2271876bd-serving-cert\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.889158 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-proxy-ca-bundles\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.889240 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39e39eb-b65e-49a7-a24b-16d2271876bd-client-ca\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.890502 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx982\" (UniqueName: \"kubernetes.io/projected/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-kube-api-access-gx982\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.890599 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39e39eb-b65e-49a7-a24b-16d2271876bd-config\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.890650 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-client-ca\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.890687 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlzrp\" (UniqueName: \"kubernetes.io/projected/d39e39eb-b65e-49a7-a24b-16d2271876bd-kube-api-access-qlzrp\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.890729 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-serving-cert\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.992604 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-config\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.992674 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39e39eb-b65e-49a7-a24b-16d2271876bd-serving-cert\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.992709 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-proxy-ca-bundles\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.992730 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d39e39eb-b65e-49a7-a24b-16d2271876bd-client-ca\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.992769 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx982\" (UniqueName: \"kubernetes.io/projected/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-kube-api-access-gx982\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.992791 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39e39eb-b65e-49a7-a24b-16d2271876bd-config\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.992813 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-client-ca\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.992831 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlzrp\" (UniqueName: \"kubernetes.io/projected/d39e39eb-b65e-49a7-a24b-16d2271876bd-kube-api-access-qlzrp\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.992852 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-serving-cert\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.994277 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d39e39eb-b65e-49a7-a24b-16d2271876bd-client-ca\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.995859 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-config\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.997768 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-serving-cert\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.998080 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d39e39eb-b65e-49a7-a24b-16d2271876bd-config\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.998181 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-client-ca\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:09 crc kubenswrapper[4688]: I0930 17:28:09.998740 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d39e39eb-b65e-49a7-a24b-16d2271876bd-serving-cert\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:10 crc kubenswrapper[4688]: I0930 17:28:10.001723 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-proxy-ca-bundles\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:10 crc kubenswrapper[4688]: I0930 17:28:10.016403 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlzrp\" (UniqueName: \"kubernetes.io/projected/d39e39eb-b65e-49a7-a24b-16d2271876bd-kube-api-access-qlzrp\") pod \"route-controller-manager-6bf7df444d-wpnkv\" (UID: \"d39e39eb-b65e-49a7-a24b-16d2271876bd\") " pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:10 crc kubenswrapper[4688]: I0930 17:28:10.019463 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx982\" (UniqueName: \"kubernetes.io/projected/3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1-kube-api-access-gx982\") pod \"controller-manager-59cbbb5d67-57zkp\" (UID: \"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1\") " 
pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:10 crc kubenswrapper[4688]: I0930 17:28:10.054169 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:10 crc kubenswrapper[4688]: I0930 17:28:10.073418 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:10 crc kubenswrapper[4688]: I0930 17:28:10.557690 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59cbbb5d67-57zkp"] Sep 30 17:28:10 crc kubenswrapper[4688]: I0930 17:28:10.626904 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv"] Sep 30 17:28:10 crc kubenswrapper[4688]: W0930 17:28:10.650095 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd39e39eb_b65e_49a7_a24b_16d2271876bd.slice/crio-745adaaa41a8dfdd4e919ca06c2dde5419a1ce60a20be1eb6e12e61850ebdeab WatchSource:0}: Error finding container 745adaaa41a8dfdd4e919ca06c2dde5419a1ce60a20be1eb6e12e61850ebdeab: Status 404 returned error can't find the container with id 745adaaa41a8dfdd4e919ca06c2dde5419a1ce60a20be1eb6e12e61850ebdeab Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.393608 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" event={"ID":"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1","Type":"ContainerStarted","Data":"9fb6c13e496e1f056bb023ab9ed2ec26a360886f38469496beceeda10127ba31"} Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.394005 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" event={"ID":"3b2a1cef-a9f2-40d7-8c97-0bf4101d3bc1","Type":"ContainerStarted","Data":"17527e2d272ee8fcdb257947afa94d4ddcbcbf5534ec4e2c133c6175ec506d9b"} Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.394028 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.396229 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" event={"ID":"d39e39eb-b65e-49a7-a24b-16d2271876bd","Type":"ContainerStarted","Data":"7f9e5bbe6e87c1355cf1d53e038d4a66375c0e5f09046bc84243ff87519d7927"} Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.396284 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" event={"ID":"d39e39eb-b65e-49a7-a24b-16d2271876bd","Type":"ContainerStarted","Data":"745adaaa41a8dfdd4e919ca06c2dde5419a1ce60a20be1eb6e12e61850ebdeab"} Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.397113 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.402892 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.403187 4688 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.425870 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59cbbb5d67-57zkp" podStartSLOduration=3.4258430029999998 podStartE2EDuration="3.425843003s" podCreationTimestamp="2025-09-30 17:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:28:11.421169502 +0000 UTC m=+748.716607080" watchObservedRunningTime="2025-09-30 17:28:11.425843003 +0000 UTC m=+748.721280571" Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.441510 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7575a89-fbed-4028-a04f-2ab40e7a9081" path="/var/lib/kubelet/pods/a7575a89-fbed-4028-a04f-2ab40e7a9081/volumes" Sep 30 17:28:11 crc kubenswrapper[4688]: I0930 17:28:11.452579 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bf7df444d-wpnkv" podStartSLOduration=3.452540122 podStartE2EDuration="3.452540122s" podCreationTimestamp="2025-09-30 17:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:28:11.446678221 +0000 UTC m=+748.742115809" watchObservedRunningTime="2025-09-30 17:28:11.452540122 +0000 UTC m=+748.747977680" Sep 30 17:28:20 crc kubenswrapper[4688]: I0930 17:28:20.517851 4688 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Sep 30 17:28:21 crc kubenswrapper[4688]: I0930 17:28:21.809560 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6d94b69f58-xr5pv" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.545832 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.546126 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.550372 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8"] Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.551786 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.553719 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cgbhd" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.554248 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.555127 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fs79f"] Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.558125 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.559731 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.561166 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.573326 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8"] Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.646439 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pfrp4"] Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.647842 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.658526 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.658826 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.659459 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.659621 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-28vcd" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.663742 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5d688f5ffc-5l9vw"] Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.665522 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.667743 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.680543 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-5l9vw"] Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.692540 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce858a63-686b-4a1c-98a6-de91bb6992e1-cert\") pod \"frr-k8s-webhook-server-5478bdb765-v9pk8\" (UID: \"ce858a63-686b-4a1c-98a6-de91bb6992e1\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.692637 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-metrics\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.692679 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4b5cdacf-945b-48ab-8105-9aed7ba409e2-frr-startup\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.692708 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-frr-conf\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.692738 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-frr-sockets\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.692792 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsz4d\" (UniqueName: \"kubernetes.io/projected/4b5cdacf-945b-48ab-8105-9aed7ba409e2-kube-api-access-fsz4d\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.692823 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b5cdacf-945b-48ab-8105-9aed7ba409e2-metrics-certs\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.692852 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-reloader\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.692876 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfg22\" (UniqueName: \"kubernetes.io/projected/ce858a63-686b-4a1c-98a6-de91bb6992e1-kube-api-access-xfg22\") pod \"frr-k8s-webhook-server-5478bdb765-v9pk8\" (UID: \"ce858a63-686b-4a1c-98a6-de91bb6992e1\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.794294 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e73cf014-002c-4642-815a-8a77f2ec12ee-cert\") pod \"controller-5d688f5ffc-5l9vw\" (UID: \"e73cf014-002c-4642-815a-8a77f2ec12ee\") " pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.794804 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgzc\" (UniqueName: \"kubernetes.io/projected/e73cf014-002c-4642-815a-8a77f2ec12ee-kube-api-access-jsgzc\") pod \"controller-5d688f5ffc-5l9vw\" (UID: \"e73cf014-002c-4642-815a-8a77f2ec12ee\") " pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.794842 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-reloader\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.794876 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfg22\" (UniqueName: \"kubernetes.io/projected/ce858a63-686b-4a1c-98a6-de91bb6992e1-kube-api-access-xfg22\") pod \"frr-k8s-webhook-server-5478bdb765-v9pk8\" (UID: \"ce858a63-686b-4a1c-98a6-de91bb6992e1\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.794913 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce858a63-686b-4a1c-98a6-de91bb6992e1-cert\") pod \"frr-k8s-webhook-server-5478bdb765-v9pk8\" (UID: \"ce858a63-686b-4a1c-98a6-de91bb6992e1\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.794949 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9936dd27-23ea-412c-98b4-dab61a77b58b-metallb-excludel2\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.794984 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlds\" (UniqueName: \"kubernetes.io/projected/9936dd27-23ea-412c-98b4-dab61a77b58b-kube-api-access-bzlds\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.795007 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-memberlist\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc 
kubenswrapper[4688]: I0930 17:28:22.795039 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-metrics\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.795060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-metrics-certs\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.795092 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4b5cdacf-945b-48ab-8105-9aed7ba409e2-frr-startup\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.795121 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-frr-conf\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.795146 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-frr-sockets\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.795193 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e73cf014-002c-4642-815a-8a77f2ec12ee-metrics-certs\") pod \"controller-5d688f5ffc-5l9vw\" (UID: \"e73cf014-002c-4642-815a-8a77f2ec12ee\") " pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.795223 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsz4d\" (UniqueName: \"kubernetes.io/projected/4b5cdacf-945b-48ab-8105-9aed7ba409e2-kube-api-access-fsz4d\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.795246 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b5cdacf-945b-48ab-8105-9aed7ba409e2-metrics-certs\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.796498 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-reloader\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.796747 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-frr-conf\") pod \"frr-k8s-fs79f\" (UID: 
\"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.796768 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-frr-sockets\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.796907 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4b5cdacf-945b-48ab-8105-9aed7ba409e2-metrics\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.797617 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4b5cdacf-945b-48ab-8105-9aed7ba409e2-frr-startup\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.802374 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce858a63-686b-4a1c-98a6-de91bb6992e1-cert\") pod \"frr-k8s-webhook-server-5478bdb765-v9pk8\" (UID: \"ce858a63-686b-4a1c-98a6-de91bb6992e1\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.804140 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b5cdacf-945b-48ab-8105-9aed7ba409e2-metrics-certs\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.812013 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfg22\" (UniqueName: \"kubernetes.io/projected/ce858a63-686b-4a1c-98a6-de91bb6992e1-kube-api-access-xfg22\") pod \"frr-k8s-webhook-server-5478bdb765-v9pk8\" (UID: \"ce858a63-686b-4a1c-98a6-de91bb6992e1\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.819129 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsz4d\" (UniqueName: \"kubernetes.io/projected/4b5cdacf-945b-48ab-8105-9aed7ba409e2-kube-api-access-fsz4d\") pod \"frr-k8s-fs79f\" (UID: \"4b5cdacf-945b-48ab-8105-9aed7ba409e2\") " pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.887236 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.896593 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9936dd27-23ea-412c-98b4-dab61a77b58b-metallb-excludel2\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.896655 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzlds\" (UniqueName: \"kubernetes.io/projected/9936dd27-23ea-412c-98b4-dab61a77b58b-kube-api-access-bzlds\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.896675 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-memberlist\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.896704 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-metrics-certs\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.896751 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e73cf014-002c-4642-815a-8a77f2ec12ee-metrics-certs\") pod \"controller-5d688f5ffc-5l9vw\" (UID: \"e73cf014-002c-4642-815a-8a77f2ec12ee\") " pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.896779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e73cf014-002c-4642-815a-8a77f2ec12ee-cert\") pod \"controller-5d688f5ffc-5l9vw\" (UID: \"e73cf014-002c-4642-815a-8a77f2ec12ee\") " pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.896794 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsgzc\" (UniqueName: \"kubernetes.io/projected/e73cf014-002c-4642-815a-8a77f2ec12ee-kube-api-access-jsgzc\") pod \"controller-5d688f5ffc-5l9vw\" (UID: \"e73cf014-002c-4642-815a-8a77f2ec12ee\") " pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: E0930 17:28:22.897049 4688 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Sep 30 17:28:22 crc kubenswrapper[4688]: E0930 17:28:22.897129 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-memberlist podName:9936dd27-23ea-412c-98b4-dab61a77b58b nodeName:}" failed. No retries permitted until 2025-09-30 17:28:23.397104522 +0000 UTC m=+760.692542090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-memberlist") pod "speaker-pfrp4" (UID: "9936dd27-23ea-412c-98b4-dab61a77b58b") : secret "metallb-memberlist" not found Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.898188 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9936dd27-23ea-412c-98b4-dab61a77b58b-metallb-excludel2\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.901093 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.901673 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.901898 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e73cf014-002c-4642-815a-8a77f2ec12ee-metrics-certs\") pod \"controller-5d688f5ffc-5l9vw\" (UID: \"e73cf014-002c-4642-815a-8a77f2ec12ee\") " pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.902687 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-metrics-certs\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.917614 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e73cf014-002c-4642-815a-8a77f2ec12ee-cert\") pod \"controller-5d688f5ffc-5l9vw\" (UID: \"e73cf014-002c-4642-815a-8a77f2ec12ee\") " pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.923659 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsgzc\" (UniqueName: \"kubernetes.io/projected/e73cf014-002c-4642-815a-8a77f2ec12ee-kube-api-access-jsgzc\") pod \"controller-5d688f5ffc-5l9vw\" (UID: \"e73cf014-002c-4642-815a-8a77f2ec12ee\") " pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.925501 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzlds\" (UniqueName: \"kubernetes.io/projected/9936dd27-23ea-412c-98b4-dab61a77b58b-kube-api-access-bzlds\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:22 crc kubenswrapper[4688]: I0930 17:28:22.986519 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:23 crc kubenswrapper[4688]: I0930 17:28:23.402908 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8"] Sep 30 17:28:23 crc kubenswrapper[4688]: I0930 17:28:23.404074 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-memberlist\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:23 crc kubenswrapper[4688]: E0930 17:28:23.404325 4688 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Sep 30 17:28:23 crc kubenswrapper[4688]: E0930 17:28:23.404440 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-memberlist podName:9936dd27-23ea-412c-98b4-dab61a77b58b nodeName:}" failed. No retries permitted until 2025-09-30 17:28:24.404413536 +0000 UTC m=+761.699851094 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-memberlist") pod "speaker-pfrp4" (UID: "9936dd27-23ea-412c-98b4-dab61a77b58b") : secret "metallb-memberlist" not found Sep 30 17:28:23 crc kubenswrapper[4688]: W0930 17:28:23.412309 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce858a63_686b_4a1c_98a6_de91bb6992e1.slice/crio-7ff18337aca0a37646ec9ebb052ae126bea0f8eb77314d84ab8f05a17eec3628 WatchSource:0}: Error finding container 7ff18337aca0a37646ec9ebb052ae126bea0f8eb77314d84ab8f05a17eec3628: Status 404 returned error can't find the container with id 7ff18337aca0a37646ec9ebb052ae126bea0f8eb77314d84ab8f05a17eec3628 Sep 30 17:28:23 crc kubenswrapper[4688]: I0930 17:28:23.465897 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerStarted","Data":"2ca31f3a0366acae7d72f494ee4704eee3142d2f539bbabc1f59a75351b2ec1d"} Sep 30 17:28:23 crc kubenswrapper[4688]: I0930 17:28:23.467065 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" event={"ID":"ce858a63-686b-4a1c-98a6-de91bb6992e1","Type":"ContainerStarted","Data":"7ff18337aca0a37646ec9ebb052ae126bea0f8eb77314d84ab8f05a17eec3628"} Sep 30 17:28:23 crc kubenswrapper[4688]: I0930 17:28:23.489451 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-5l9vw"] Sep 30 17:28:23 crc kubenswrapper[4688]: W0930 17:28:23.491614 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode73cf014_002c_4642_815a_8a77f2ec12ee.slice/crio-772a120c22e7b8f3de8f199e422333bc7551c6e28f328ccd2da7594d09d8fe47 WatchSource:0}: Error finding container 772a120c22e7b8f3de8f199e422333bc7551c6e28f328ccd2da7594d09d8fe47: Status 404 returned error can't find the container with id 772a120c22e7b8f3de8f199e422333bc7551c6e28f328ccd2da7594d09d8fe47 Sep 30 17:28:24 crc kubenswrapper[4688]: I0930 17:28:24.417350 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-memberlist\") pod \"speaker-pfrp4\" 
Sep 30 17:28:24 crc kubenswrapper[4688]: I0930 17:28:24.424446 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9936dd27-23ea-412c-98b4-dab61a77b58b-memberlist\") pod \"speaker-pfrp4\" (UID: \"9936dd27-23ea-412c-98b4-dab61a77b58b\") " pod="metallb-system/speaker-pfrp4" Sep 30 17:28:24 crc kubenswrapper[4688]: I0930 17:28:24.475565 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pfrp4" Sep 30 17:28:24 crc kubenswrapper[4688]: I0930 17:28:24.488988 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-5l9vw" event={"ID":"e73cf014-002c-4642-815a-8a77f2ec12ee","Type":"ContainerStarted","Data":"871189cc123b099622e91d49be9d6349cd4fe14b7346a60944431a9f246d1ce1"} Sep 30 17:28:24 crc kubenswrapper[4688]: I0930 17:28:24.489181 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-5l9vw" event={"ID":"e73cf014-002c-4642-815a-8a77f2ec12ee","Type":"ContainerStarted","Data":"cd303d11546fee13fdb7d75275aef45ae5c2fdefdeab0d738d636ba4f4dd104c"} Sep 30 17:28:24 crc kubenswrapper[4688]: I0930 17:28:24.489552 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:24 crc kubenswrapper[4688]: I0930 17:28:24.489638 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-5l9vw" event={"ID":"e73cf014-002c-4642-815a-8a77f2ec12ee","Type":"ContainerStarted","Data":"772a120c22e7b8f3de8f199e422333bc7551c6e28f328ccd2da7594d09d8fe47"} Sep 30 17:28:24 crc kubenswrapper[4688]: I0930 17:28:24.518953 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5d688f5ffc-5l9vw" podStartSLOduration=2.518924214 podStartE2EDuration="2.518924214s" podCreationTimestamp="2025-09-30 17:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:28:24.516955773 +0000 UTC m=+761.812393351" watchObservedRunningTime="2025-09-30 17:28:24.518924214 +0000 UTC m=+761.814361772" Sep 30 17:28:25 crc kubenswrapper[4688]: I0930 17:28:25.498682 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pfrp4" event={"ID":"9936dd27-23ea-412c-98b4-dab61a77b58b","Type":"ContainerStarted","Data":"33353dfbf77eac6823c379e6cd5aad8f3fca97c531389a2390f03bcdea8917cb"} Sep 30 17:28:25 crc kubenswrapper[4688]: I0930 17:28:25.499236 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pfrp4" event={"ID":"9936dd27-23ea-412c-98b4-dab61a77b58b","Type":"ContainerStarted","Data":"e54ad4e00e1b48f3ec4ca8aae37068dfbacbd13dd3622853671f40057dcfd221"} Sep 30 17:28:25 crc kubenswrapper[4688]: I0930 17:28:25.499253 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pfrp4" event={"ID":"9936dd27-23ea-412c-98b4-dab61a77b58b","Type":"ContainerStarted","Data":"fad002456588b7c14b431028c8bae649ac6773cdc46c3608b6ca1e719a65a813"} Sep 30 17:28:25 crc kubenswrapper[4688]: I0930 17:28:25.499617 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pfrp4" Sep 30 17:28:25 crc kubenswrapper[4688]: I0930 17:28:25.524529 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pfrp4" podStartSLOduration=3.524509348 podStartE2EDuration="3.524509348s" podCreationTimestamp="2025-09-30 17:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:28:25.520972606 +0000 UTC m=+762.816410164" watchObservedRunningTime="2025-09-30 17:28:25.524509348 +0000 UTC m=+762.819946906"
Sep 30 17:28:31 crc kubenswrapper[4688]: I0930 17:28:31.548228 4688 generic.go:334] "Generic (PLEG): container finished" podID="4b5cdacf-945b-48ab-8105-9aed7ba409e2" containerID="fb73c03f6dd3387f744036efde1bf4d15ac30870e35984a6cdc470fa3115e3d7" exitCode=0 Sep 30 17:28:31 crc kubenswrapper[4688]: I0930 17:28:31.548354 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerDied","Data":"fb73c03f6dd3387f744036efde1bf4d15ac30870e35984a6cdc470fa3115e3d7"} Sep 30 17:28:31 crc kubenswrapper[4688]: I0930 17:28:31.552608 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" event={"ID":"ce858a63-686b-4a1c-98a6-de91bb6992e1","Type":"ContainerStarted","Data":"c2ef61d1b0781859ec93f041ee3c0a1d0d353df75e7994ebea57723ac6f9fa33"} Sep 30 17:28:31 crc kubenswrapper[4688]: I0930 17:28:31.552900 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:31 crc kubenswrapper[4688]: I0930 17:28:31.603398 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" podStartSLOduration=1.6958935149999999 podStartE2EDuration="9.603376375s" podCreationTimestamp="2025-09-30 17:28:22 +0000 UTC" firstStartedPulling="2025-09-30 17:28:23.415238786 +0000 UTC m=+760.710676344" lastFinishedPulling="2025-09-30 17:28:31.322721656 +0000 UTC m=+768.618159204" observedRunningTime="2025-09-30 17:28:31.597535314 +0000 UTC m=+768.892972892" watchObservedRunningTime="2025-09-30 17:28:31.603376375 +0000 UTC m=+768.898813933" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.397423 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-89xcc"] Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.398726 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89xcc"
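
The pod_startup_latency_tracker entries above encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes time spent pulling images (lastFinishedPulling minus firstStartedPulling). Pods that pulled nothing log the zero-value sentinel (0001-01-01) for both pull timestamps and report SLO == E2E, as speaker-pfrp4 does; frr-k8s-webhook-server-5478bdb765-v9pk8 spent about 7.9s pulling, so its 9.6s E2E shrinks to roughly 1.7s of SLO time. A sketch that recomputes the logged figures (timestamps copied from the entries above; only the arithmetic is added):

    // slo_check.go - recompute the kubelet's startup-duration fields from the
    // timestamps logged in this journal excerpt.
    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(v string) time.Time {
    	// Layout matches the "2025-09-30 17:28:22 +0000 UTC" form in the log;
    	// the fractional seconds are optional.
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-09-30 17:28:22 +0000 UTC")

    	// speaker-pfrp4: zero-value pull timestamps, so SLO == E2E.
    	observed := mustParse("2025-09-30 17:28:25.524509348 +0000 UTC")
    	fmt.Println("speaker E2E:", observed.Sub(created)) // 3.524509348s

    	// frr-k8s-webhook-server: SLO = E2E - (lastFinishedPulling - firstStartedPulling).
    	obsWebhook := mustParse("2025-09-30 17:28:31.603376375 +0000 UTC")
    	pullStart := mustParse("2025-09-30 17:28:23.415238786 +0000 UTC")
    	pullEnd := mustParse("2025-09-30 17:28:31.322721656 +0000 UTC")
    	e2e := obsWebhook.Sub(created)      // 9.603376375s
    	slo := e2e - pullEnd.Sub(pullStart) // ~1.6958935s, matching podStartSLOduration up to rounding
    	fmt.Println("webhook E2E:", e2e, "SLO:", slo)
    }
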
Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.416201 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-89xcc"] Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.557708 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-catalog-content\") pod \"redhat-operators-89xcc\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.557777 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmbfk\" (UniqueName: \"kubernetes.io/projected/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-kube-api-access-kmbfk\") pod \"redhat-operators-89xcc\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.557806 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-utilities\") pod \"redhat-operators-89xcc\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.561220 4688 generic.go:334] "Generic (PLEG): container finished" podID="4b5cdacf-945b-48ab-8105-9aed7ba409e2" containerID="baa34594296ae6d838cc615e95dd859a7da65c554f4b77e83be67f0a1dbd32c0" exitCode=0 Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.561335 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerDied","Data":"baa34594296ae6d838cc615e95dd859a7da65c554f4b77e83be67f0a1dbd32c0"} Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.659712 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmbfk\" (UniqueName: \"kubernetes.io/projected/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-kube-api-access-kmbfk\") pod \"redhat-operators-89xcc\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.660222 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-utilities\") pod \"redhat-operators-89xcc\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.660317 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-catalog-content\") pod \"redhat-operators-89xcc\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.660894 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-utilities\") pod \"redhat-operators-89xcc\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " 
pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.660982 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-catalog-content\") pod \"redhat-operators-89xcc\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.696605 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmbfk\" (UniqueName: \"kubernetes.io/projected/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-kube-api-access-kmbfk\") pod \"redhat-operators-89xcc\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:32 crc kubenswrapper[4688]: I0930 17:28:32.714364 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:33 crc kubenswrapper[4688]: I0930 17:28:33.227749 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-89xcc"] Sep 30 17:28:33 crc kubenswrapper[4688]: I0930 17:28:33.572960 4688 generic.go:334] "Generic (PLEG): container finished" podID="4b5cdacf-945b-48ab-8105-9aed7ba409e2" containerID="99b0beb37c7658e81226e0bf24e59add76b19685644aa7d3d5d1d609badc8c29" exitCode=0 Sep 30 17:28:33 crc kubenswrapper[4688]: I0930 17:28:33.573062 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerDied","Data":"99b0beb37c7658e81226e0bf24e59add76b19685644aa7d3d5d1d609badc8c29"} Sep 30 17:28:33 crc kubenswrapper[4688]: I0930 17:28:33.575880 4688 generic.go:334] "Generic (PLEG): container finished" podID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerID="23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e" exitCode=0 Sep 30 17:28:33 crc kubenswrapper[4688]: I0930 17:28:33.575912 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89xcc" event={"ID":"b5e0a66c-1e31-45b3-a514-85484cf2d0ef","Type":"ContainerDied","Data":"23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e"} Sep 30 17:28:33 crc kubenswrapper[4688]: I0930 17:28:33.575931 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89xcc" event={"ID":"b5e0a66c-1e31-45b3-a514-85484cf2d0ef","Type":"ContainerStarted","Data":"9f8302abbb1ea817022a2617b4d390b52848d812ca1c80df9bb5acc2db35b71a"} Sep 30 17:28:34 crc kubenswrapper[4688]: I0930 17:28:34.481891 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pfrp4" Sep 30 17:28:34 crc kubenswrapper[4688]: I0930 17:28:34.591612 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerStarted","Data":"8f7618e3acb7a2da28bcca29842348aa5be04ed9556981c2bfa350f219c9ab50"} Sep 30 17:28:34 crc kubenswrapper[4688]: I0930 17:28:34.591658 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerStarted","Data":"a146b53d3f5b64c5df7857213aec7defda97a955a13f4db479dfc3b24417440f"} Sep 30 17:28:34 crc kubenswrapper[4688]: I0930 17:28:34.591669 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerStarted","Data":"f202ac4a539312a7b0251100bf848111d1ec7b95ac4bae71e0f58c683cbd977f"} Sep 30 17:28:34 crc kubenswrapper[4688]: I0930 17:28:34.591677 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerStarted","Data":"5d4d911d99e022cda9066e7d143d4d0f37cd56659b2bfaf9b84ac423d59cec25"} Sep 30 17:28:34 crc kubenswrapper[4688]: I0930 17:28:34.591690 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerStarted","Data":"c166166262155521cafb433b68fe016f154aa1f3b89f0e4ffe4506269c58dea0"} Sep 30 17:28:35 crc kubenswrapper[4688]: I0930 17:28:35.605771 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs79f" event={"ID":"4b5cdacf-945b-48ab-8105-9aed7ba409e2","Type":"ContainerStarted","Data":"7c01a8725b57f01a5363837b2847c3940d844865f41680c172d77b0111880cfd"} Sep 30 17:28:35 crc kubenswrapper[4688]: I0930 17:28:35.606314 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:35 crc kubenswrapper[4688]: I0930 17:28:35.608557 4688 generic.go:334] "Generic (PLEG): container finished" podID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerID="72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742" exitCode=0 Sep 30 17:28:35 crc kubenswrapper[4688]: I0930 17:28:35.608602 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89xcc" event={"ID":"b5e0a66c-1e31-45b3-a514-85484cf2d0ef","Type":"ContainerDied","Data":"72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742"} Sep 30 17:28:35 crc kubenswrapper[4688]: I0930 17:28:35.639527 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fs79f" podStartSLOduration=5.480121141 podStartE2EDuration="13.639501488s" podCreationTimestamp="2025-09-30 17:28:22 +0000 UTC" firstStartedPulling="2025-09-30 17:28:23.119562778 +0000 UTC m=+760.415000336" lastFinishedPulling="2025-09-30 17:28:31.278943125 +0000 UTC m=+768.574380683" observedRunningTime="2025-09-30 17:28:35.634481058 +0000 UTC m=+772.929918636" watchObservedRunningTime="2025-09-30 17:28:35.639501488 +0000 UTC m=+772.934939046" Sep 30 17:28:36 crc kubenswrapper[4688]: I0930 17:28:36.619513 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89xcc" event={"ID":"b5e0a66c-1e31-45b3-a514-85484cf2d0ef","Type":"ContainerStarted","Data":"eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da"} Sep 30 17:28:36 crc kubenswrapper[4688]: I0930 17:28:36.643391 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-89xcc" podStartSLOduration=1.872104766 podStartE2EDuration="4.643368767s" podCreationTimestamp="2025-09-30 17:28:32 +0000 UTC" firstStartedPulling="2025-09-30 17:28:33.577903397 +0000 UTC m=+770.873340955" lastFinishedPulling="2025-09-30 17:28:36.349167398 +0000 UTC m=+773.644604956" observedRunningTime="2025-09-30 17:28:36.639925288 +0000 UTC m=+773.935362846" watchObservedRunningTime="2025-09-30 17:28:36.643368767 +0000 UTC m=+773.938806325" Sep 30 17:28:37 crc kubenswrapper[4688]: I0930 17:28:37.902642 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:37 crc kubenswrapper[4688]: I0930 17:28:37.950680 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.142990 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-msndw"] Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.144254 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-msndw" Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.146896 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.146970 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nv2jj" Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.147562 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.155084 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-msndw"] Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.212169 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8s8\" (UniqueName: \"kubernetes.io/projected/99a504bc-95c8-475f-b3ec-c1d2026e95c7-kube-api-access-lf8s8\") pod \"openstack-operator-index-msndw\" (UID: \"99a504bc-95c8-475f-b3ec-c1d2026e95c7\") " pod="openstack-operators/openstack-operator-index-msndw" Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.314487 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf8s8\" (UniqueName: \"kubernetes.io/projected/99a504bc-95c8-475f-b3ec-c1d2026e95c7-kube-api-access-lf8s8\") pod \"openstack-operator-index-msndw\" (UID: \"99a504bc-95c8-475f-b3ec-c1d2026e95c7\") " pod="openstack-operators/openstack-operator-index-msndw" Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.342728 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8s8\" (UniqueName: \"kubernetes.io/projected/99a504bc-95c8-475f-b3ec-c1d2026e95c7-kube-api-access-lf8s8\") pod \"openstack-operator-index-msndw\" (UID: \"99a504bc-95c8-475f-b3ec-c1d2026e95c7\") " pod="openstack-operators/openstack-operator-index-msndw" Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.460727 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-msndw" Sep 30 17:28:40 crc kubenswrapper[4688]: I0930 17:28:40.950857 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-msndw"] Sep 30 17:28:40 crc kubenswrapper[4688]: W0930 17:28:40.965935 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99a504bc_95c8_475f_b3ec_c1d2026e95c7.slice/crio-386f7e2816c5393dfe6828f1cff89283411e85f14c8266f11539b95de7476cbd WatchSource:0}: Error finding container 386f7e2816c5393dfe6828f1cff89283411e85f14c8266f11539b95de7476cbd: Status 404 returned error can't find the container with id 386f7e2816c5393dfe6828f1cff89283411e85f14c8266f11539b95de7476cbd Sep 30 17:28:41 crc kubenswrapper[4688]: I0930 17:28:41.661430 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msndw" event={"ID":"99a504bc-95c8-475f-b3ec-c1d2026e95c7","Type":"ContainerStarted","Data":"386f7e2816c5393dfe6828f1cff89283411e85f14c8266f11539b95de7476cbd"} Sep 30 17:28:42 crc kubenswrapper[4688]: I0930 17:28:42.714996 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:42 crc kubenswrapper[4688]: I0930 17:28:42.715076 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:42 crc kubenswrapper[4688]: I0930 17:28:42.794944 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:42 crc kubenswrapper[4688]: I0930 17:28:42.892734 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-v9pk8" Sep 30 17:28:42 crc kubenswrapper[4688]: I0930 17:28:42.944227 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kfp4r"] Sep 30 17:28:42 crc kubenswrapper[4688]: I0930 17:28:42.945977 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:42 crc kubenswrapper[4688]: I0930 17:28:42.968198 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kfp4r"] Sep 30 17:28:42 crc kubenswrapper[4688]: I0930 17:28:42.990566 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5d688f5ffc-5l9vw" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.066645 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-catalog-content\") pod \"certified-operators-kfp4r\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.067153 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-utilities\") pod \"certified-operators-kfp4r\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.067267 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdqh\" (UniqueName: \"kubernetes.io/projected/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-kube-api-access-jkdqh\") pod \"certified-operators-kfp4r\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.169213 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-utilities\") pod \"certified-operators-kfp4r\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.169522 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdqh\" (UniqueName: \"kubernetes.io/projected/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-kube-api-access-jkdqh\") pod \"certified-operators-kfp4r\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.169559 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-catalog-content\") pod \"certified-operators-kfp4r\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.170018 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-catalog-content\") pod \"certified-operators-kfp4r\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.170246 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-utilities\") pod \"certified-operators-kfp4r\" (UID: 
\"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.198933 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdqh\" (UniqueName: \"kubernetes.io/projected/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-kube-api-access-jkdqh\") pod \"certified-operators-kfp4r\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.334879 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.729338 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:43 crc kubenswrapper[4688]: I0930 17:28:43.919200 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kfp4r"] Sep 30 17:28:43 crc kubenswrapper[4688]: W0930 17:28:43.949149 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe0dfb7c_133c_4a7d_b576_6e9ff7a34aed.slice/crio-c5f2e9679e0acf84248d60b2da8e90b4c6b16e1b71b641c590a0ebdda86fbf1a WatchSource:0}: Error finding container c5f2e9679e0acf84248d60b2da8e90b4c6b16e1b71b641c590a0ebdda86fbf1a: Status 404 returned error can't find the container with id c5f2e9679e0acf84248d60b2da8e90b4c6b16e1b71b641c590a0ebdda86fbf1a Sep 30 17:28:44 crc kubenswrapper[4688]: I0930 17:28:44.686042 4688 generic.go:334] "Generic (PLEG): container finished" podID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerID="7e99e0e9e399c35169c89cbfcea428f5dad9f2052e76d0588a69b98f848ccbf4" exitCode=0 Sep 30 17:28:44 crc kubenswrapper[4688]: I0930 17:28:44.686160 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfp4r" event={"ID":"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed","Type":"ContainerDied","Data":"7e99e0e9e399c35169c89cbfcea428f5dad9f2052e76d0588a69b98f848ccbf4"} Sep 30 17:28:44 crc kubenswrapper[4688]: I0930 17:28:44.686483 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfp4r" event={"ID":"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed","Type":"ContainerStarted","Data":"c5f2e9679e0acf84248d60b2da8e90b4c6b16e1b71b641c590a0ebdda86fbf1a"} Sep 30 17:28:45 crc kubenswrapper[4688]: I0930 17:28:45.534442 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-msndw"] Sep 30 17:28:46 crc kubenswrapper[4688]: I0930 17:28:46.141909 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-f55sl"] Sep 30 17:28:46 crc kubenswrapper[4688]: I0930 17:28:46.142753 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-f55sl" Sep 30 17:28:46 crc kubenswrapper[4688]: I0930 17:28:46.154829 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f55sl"] Sep 30 17:28:46 crc kubenswrapper[4688]: I0930 17:28:46.260352 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xcsb\" (UniqueName: \"kubernetes.io/projected/0d838e78-fbd6-46d0-a298-efa17c407500-kube-api-access-5xcsb\") pod \"openstack-operator-index-f55sl\" (UID: \"0d838e78-fbd6-46d0-a298-efa17c407500\") " pod="openstack-operators/openstack-operator-index-f55sl" Sep 30 17:28:46 crc kubenswrapper[4688]: I0930 17:28:46.362554 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xcsb\" (UniqueName: \"kubernetes.io/projected/0d838e78-fbd6-46d0-a298-efa17c407500-kube-api-access-5xcsb\") pod \"openstack-operator-index-f55sl\" (UID: \"0d838e78-fbd6-46d0-a298-efa17c407500\") " pod="openstack-operators/openstack-operator-index-f55sl" Sep 30 17:28:46 crc kubenswrapper[4688]: I0930 17:28:46.388534 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xcsb\" (UniqueName: \"kubernetes.io/projected/0d838e78-fbd6-46d0-a298-efa17c407500-kube-api-access-5xcsb\") pod \"openstack-operator-index-f55sl\" (UID: \"0d838e78-fbd6-46d0-a298-efa17c407500\") " pod="openstack-operators/openstack-operator-index-f55sl" Sep 30 17:28:46 crc kubenswrapper[4688]: I0930 17:28:46.472769 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f55sl" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.131644 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-89xcc"] Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.132365 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-89xcc" podUID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerName="registry-server" containerID="cri-o://eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da" gracePeriod=2 Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.233152 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f55sl"] Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.662282 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89xcc"
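
The DELETE above kicks off graceful termination: "Killing container with a grace period" with gracePeriod=2 means CRI-O delivers SIGTERM to registry-server and escalates to SIGKILL only if the process outlives those two seconds. The grace period normally comes from the pod's terminationGracePeriodSeconds, but a deleting client can also override it, as in this hedged client-go sketch (pod name and namespace copied from the log, everything else assumed):

    // delete_with_grace.go - sketch: delete a pod with an explicit grace
    // period, which produces exactly the gracePeriod=2 kill seen above.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	grace := int64(2) // seconds between SIGTERM and SIGKILL
    	err = client.CoreV1().Pods("openshift-marketplace").Delete(
    		context.TODO(),
    		"redhat-operators-89xcc",
    		metav1.DeleteOptions{GracePeriodSeconds: &grace},
    	)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("delete accepted; the kubelet sends SIGTERM now, SIGKILL after the grace period")
    }
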
Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.706082 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f55sl" event={"ID":"0d838e78-fbd6-46d0-a298-efa17c407500","Type":"ContainerStarted","Data":"c196d874578e2fc9c04f9f50fe07ff7361f9325fa7bce519e9eb702d3e2706ad"} Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.706145 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f55sl" event={"ID":"0d838e78-fbd6-46d0-a298-efa17c407500","Type":"ContainerStarted","Data":"c22b6b7803d3b2a8d724783b5bdc372fae6aebf8707a0d5cc5fefd1999a3629b"} Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.708297 4688 generic.go:334] "Generic (PLEG): container finished" podID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerID="5ffd902a981b2e81b4d05bd09abc3b6526fe17b60df6d6b536a1c9007a781620" exitCode=0 Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.708407 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfp4r" event={"ID":"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed","Type":"ContainerDied","Data":"5ffd902a981b2e81b4d05bd09abc3b6526fe17b60df6d6b536a1c9007a781620"} Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.710860 4688 generic.go:334] "Generic (PLEG): container finished" podID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerID="eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da" exitCode=0 Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.710944 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89xcc" event={"ID":"b5e0a66c-1e31-45b3-a514-85484cf2d0ef","Type":"ContainerDied","Data":"eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da"} Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.710944 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-89xcc" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.710969 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89xcc" event={"ID":"b5e0a66c-1e31-45b3-a514-85484cf2d0ef","Type":"ContainerDied","Data":"9f8302abbb1ea817022a2617b4d390b52848d812ca1c80df9bb5acc2db35b71a"} Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.711000 4688 scope.go:117] "RemoveContainer" containerID="eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.712539 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msndw" event={"ID":"99a504bc-95c8-475f-b3ec-c1d2026e95c7","Type":"ContainerStarted","Data":"d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27"} Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.712699 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-msndw" podUID="99a504bc-95c8-475f-b3ec-c1d2026e95c7" containerName="registry-server" containerID="cri-o://d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27" gracePeriod=2 Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.744852 4688 scope.go:117] "RemoveContainer" containerID="72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.746177 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-f55sl" podStartSLOduration=1.627673021 podStartE2EDuration="1.746153591s" podCreationTimestamp="2025-09-30 17:28:46 +0000 UTC" firstStartedPulling="2025-09-30 17:28:47.258992818 +0000 UTC m=+784.554430376" lastFinishedPulling="2025-09-30 17:28:47.377473388 +0000 UTC m=+784.672910946" observedRunningTime="2025-09-30 17:28:47.724973474 +0000 UTC m=+785.020411052" watchObservedRunningTime="2025-09-30 17:28:47.746153591 +0000 UTC m=+785.041591149" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.764257 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-msndw" podStartSLOduration=1.9132530079999999 podStartE2EDuration="7.764233198s" podCreationTimestamp="2025-09-30 17:28:40 +0000 UTC" firstStartedPulling="2025-09-30 17:28:41.019221045 +0000 UTC m=+778.314658603" lastFinishedPulling="2025-09-30 17:28:46.870201245 +0000 UTC m=+784.165638793" observedRunningTime="2025-09-30 17:28:47.761947219 +0000 UTC m=+785.057384797" watchObservedRunningTime="2025-09-30 17:28:47.764233198 +0000 UTC m=+785.059670756" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.797278 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-catalog-content\") pod \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.797418 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmbfk\" (UniqueName: \"kubernetes.io/projected/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-kube-api-access-kmbfk\") pod \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.797491 4688 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-utilities\") pod \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\" (UID: \"b5e0a66c-1e31-45b3-a514-85484cf2d0ef\") " Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.799707 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-utilities" (OuterVolumeSpecName: "utilities") pod "b5e0a66c-1e31-45b3-a514-85484cf2d0ef" (UID: "b5e0a66c-1e31-45b3-a514-85484cf2d0ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.803872 4688 scope.go:117] "RemoveContainer" containerID="23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.805709 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-kube-api-access-kmbfk" (OuterVolumeSpecName: "kube-api-access-kmbfk") pod "b5e0a66c-1e31-45b3-a514-85484cf2d0ef" (UID: "b5e0a66c-1e31-45b3-a514-85484cf2d0ef"). InnerVolumeSpecName "kube-api-access-kmbfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.818705 4688 scope.go:117] "RemoveContainer" containerID="eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da" Sep 30 17:28:47 crc kubenswrapper[4688]: E0930 17:28:47.819449 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da\": container with ID starting with eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da not found: ID does not exist" containerID="eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.819520 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da"} err="failed to get container status \"eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da\": rpc error: code = NotFound desc = could not find container \"eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da\": container with ID starting with eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da not found: ID does not exist" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.819598 4688 scope.go:117] "RemoveContainer" containerID="72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742" Sep 30 17:28:47 crc kubenswrapper[4688]: E0930 17:28:47.820106 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742\": container with ID starting with 72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742 not found: ID does not exist" containerID="72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.820168 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742"} err="failed to get container status \"72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742\": rpc error: code = 
NotFound desc = could not find container \"72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742\": container with ID starting with 72cf744d1a341b42a02084749045a04dedfa45a739ed97c8293f73dbcee1c742 not found: ID does not exist" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.820211 4688 scope.go:117] "RemoveContainer" containerID="23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e" Sep 30 17:28:47 crc kubenswrapper[4688]: E0930 17:28:47.820681 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e\": container with ID starting with 23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e not found: ID does not exist" containerID="23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.820723 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e"} err="failed to get container status \"23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e\": rpc error: code = NotFound desc = could not find container \"23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e\": container with ID starting with 23b17295a7a6e901ae3232414fc9b63150cf970edf9df65b27ee72fe285d599e not found: ID does not exist" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.877503 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5e0a66c-1e31-45b3-a514-85484cf2d0ef" (UID: "b5e0a66c-1e31-45b3-a514-85484cf2d0ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.925532 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.925584 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:47 crc kubenswrapper[4688]: I0930 17:28:47.925649 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmbfk\" (UniqueName: \"kubernetes.io/projected/b5e0a66c-1e31-45b3-a514-85484cf2d0ef-kube-api-access-kmbfk\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.056399 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-89xcc"] Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.060125 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-89xcc"] Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.131076 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-msndw"
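
The NotFound errors above are a benign race, not a failure: the kubelet prunes its bookkeeping for containers of a just-deleted pod, and by the time it asks CRI-O for their status the runtime has already removed them. Cleanup paths conventionally treat NotFound as success. A small sketch of that idiom against a gRPC API (removeContainer below is an illustrative stand-in, not a real CRI client call):

    // notfound_tolerant.go - sketch of the usual cleanup idiom when talking
    // to a gRPC service such as a CRI runtime: a NotFound error means the
    // work is already done, so it is swallowed rather than propagated.
    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer is a placeholder for this sketch; it always reports
    // NotFound, the way the runtime did for the already-removed containers.
    func removeContainer(id string) error {
    	return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func cleanup(id string) error {
    	if err := removeContainer(id); err != nil {
    		if status.Code(err) == codes.NotFound {
    			fmt.Printf("container %s already gone; nothing to do\n", id)
    			return nil // idempotent delete: absence is the desired state
    		}
    		return err
    	}
    	return nil
    }

    func main() {
    	_ = cleanup("eb04573f59c9db3886d4808b59f94e191ba23018b7d2a35a9a0fd2247284d2da")
    }
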
Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.229281 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf8s8\" (UniqueName: \"kubernetes.io/projected/99a504bc-95c8-475f-b3ec-c1d2026e95c7-kube-api-access-lf8s8\") pod \"99a504bc-95c8-475f-b3ec-c1d2026e95c7\" (UID: \"99a504bc-95c8-475f-b3ec-c1d2026e95c7\") " Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.233784 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99a504bc-95c8-475f-b3ec-c1d2026e95c7-kube-api-access-lf8s8" (OuterVolumeSpecName: "kube-api-access-lf8s8") pod "99a504bc-95c8-475f-b3ec-c1d2026e95c7" (UID: "99a504bc-95c8-475f-b3ec-c1d2026e95c7"). InnerVolumeSpecName "kube-api-access-lf8s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.331739 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf8s8\" (UniqueName: \"kubernetes.io/projected/99a504bc-95c8-475f-b3ec-c1d2026e95c7-kube-api-access-lf8s8\") on node \"crc\" DevicePath \"\"" Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.727080 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfp4r" event={"ID":"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed","Type":"ContainerStarted","Data":"874644243f1fa55c90fae95d6fb56fc6ba3fd42608ebeab00ebfc75a0bdc98b0"} Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.729166 4688 generic.go:334] "Generic (PLEG): container finished" podID="99a504bc-95c8-475f-b3ec-c1d2026e95c7" containerID="d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27" exitCode=0 Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.729224 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msndw" event={"ID":"99a504bc-95c8-475f-b3ec-c1d2026e95c7","Type":"ContainerDied","Data":"d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27"} Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.729286 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msndw" event={"ID":"99a504bc-95c8-475f-b3ec-c1d2026e95c7","Type":"ContainerDied","Data":"386f7e2816c5393dfe6828f1cff89283411e85f14c8266f11539b95de7476cbd"} Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.729290 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-msndw" Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.729312 4688 scope.go:117] "RemoveContainer" containerID="d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27" Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.754051 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kfp4r" podStartSLOduration=3.7784327859999998 podStartE2EDuration="6.754027315s" podCreationTimestamp="2025-09-30 17:28:42 +0000 UTC" firstStartedPulling="2025-09-30 17:28:45.177062402 +0000 UTC m=+782.472499960" lastFinishedPulling="2025-09-30 17:28:48.152656921 +0000 UTC m=+785.448094489" observedRunningTime="2025-09-30 17:28:48.744245972 +0000 UTC m=+786.039683540" watchObservedRunningTime="2025-09-30 17:28:48.754027315 +0000 UTC m=+786.049464893" Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.755619 4688 scope.go:117] "RemoveContainer" containerID="d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27" Sep 30 17:28:48 crc kubenswrapper[4688]: E0930 17:28:48.756374 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27\": container with ID starting with d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27 not found: ID does not exist" containerID="d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27" Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.756454 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27"} err="failed to get container status \"d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27\": rpc error: code = NotFound desc = could not find container \"d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27\": container with ID starting with d32bbf4e5ecc0b039730d3e3c91e0b7750fc88bf4c4e3dc6af25296737e47d27 not found: ID does not exist" Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.773175 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-msndw"] Sep 30 17:28:48 crc kubenswrapper[4688]: I0930 17:28:48.776726 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-msndw"] Sep 30 17:28:49 crc kubenswrapper[4688]: I0930 17:28:49.437955 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99a504bc-95c8-475f-b3ec-c1d2026e95c7" path="/var/lib/kubelet/pods/99a504bc-95c8-475f-b3ec-c1d2026e95c7/volumes" Sep 30 17:28:49 crc kubenswrapper[4688]: I0930 17:28:49.439120 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" path="/var/lib/kubelet/pods/b5e0a66c-1e31-45b3-a514-85484cf2d0ef/volumes" Sep 30 17:28:52 crc kubenswrapper[4688]: I0930 17:28:52.545804 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:28:52 crc kubenswrapper[4688]: I0930 17:28:52.546200 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" 
podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:28:52 crc kubenswrapper[4688]: I0930 17:28:52.546245 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:28:52 crc kubenswrapper[4688]: I0930 17:28:52.546746 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5020dee8e64f5f30e7a64368e8118bacf47eeb6e8b507a0b15d70edc0bd6cd00"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:28:52 crc kubenswrapper[4688]: I0930 17:28:52.546795 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://5020dee8e64f5f30e7a64368e8118bacf47eeb6e8b507a0b15d70edc0bd6cd00" gracePeriod=600 Sep 30 17:28:52 crc kubenswrapper[4688]: I0930 17:28:52.784379 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="5020dee8e64f5f30e7a64368e8118bacf47eeb6e8b507a0b15d70edc0bd6cd00" exitCode=0 Sep 30 17:28:52 crc kubenswrapper[4688]: I0930 17:28:52.784416 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"5020dee8e64f5f30e7a64368e8118bacf47eeb6e8b507a0b15d70edc0bd6cd00"} Sep 30 17:28:52 crc kubenswrapper[4688]: I0930 17:28:52.784889 4688 scope.go:117] "RemoveContainer" containerID="f0e3d987437b58b18ce72c96975b0b5cff9d653e36b573ea916dae6fdfa57434" Sep 30 17:28:52 crc kubenswrapper[4688]: I0930 17:28:52.908075 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fs79f" Sep 30 17:28:53 crc kubenswrapper[4688]: I0930 17:28:53.335364 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:53 crc kubenswrapper[4688]: I0930 17:28:53.336380 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:53 crc kubenswrapper[4688]: I0930 17:28:53.388788 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:53 crc kubenswrapper[4688]: I0930 17:28:53.792967 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"26d311e67dd58a633fdcdd12ef828cb6c66054ad9ffec131080d59786bc5b556"} Sep 30 17:28:53 crc kubenswrapper[4688]: I0930 17:28:53.841803 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:28:56 crc kubenswrapper[4688]: I0930 17:28:56.473330 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-f55sl" Sep 30 17:28:56 crc kubenswrapper[4688]: I0930 17:28:56.474223 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-f55sl"
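
The restart above is the liveness-probe contract end to end: the prober's GET against http://127.0.0.1:8798/health is refused, the probe is marked a failure, and once the failure threshold is reached the kubelet kills machine-config-daemon (here with its 600s grace period) and starts a replacement, visible as the ContainerDied/ContainerStarted pair for the same pod. A stripped-down sketch of such an HTTP check (the treat-2xx/3xx-as-success rule matches kubelet behavior; the timeout and everything else are illustrative):

    // http_probe.go - sketch of an HTTP liveness check in the spirit of the
    // kubelet prober: GET the endpoint, treat connection errors and non-2xx/3xx
    // responses as probe failures.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func probe(url string) error {
    	client := &http.Client{Timeout: 1 * time.Second} // assumed timeout
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "connect: connection refused", as in the log
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := probe("http://127.0.0.1:8798/health"); err != nil {
    		fmt.Println("liveness probe failed:", err)
    		// the kubelet would count this toward failureThreshold and
    		// eventually restart the container, as seen above
    	}
    }
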
Sep 30 17:28:56 crc kubenswrapper[4688]: I0930 17:28:56.509386 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-f55sl" Sep 30 17:28:56 crc kubenswrapper[4688]: I0930 17:28:56.847356 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-f55sl" Sep 30 17:28:58 crc kubenswrapper[4688]: I0930 17:28:58.134992 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kfp4r"] Sep 30 17:28:58 crc kubenswrapper[4688]: I0930 17:28:58.135616 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kfp4r" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerName="registry-server" containerID="cri-o://874644243f1fa55c90fae95d6fb56fc6ba3fd42608ebeab00ebfc75a0bdc98b0" gracePeriod=2 Sep 30 17:28:59 crc kubenswrapper[4688]: I0930 17:28:59.839431 4688 generic.go:334] "Generic (PLEG): container finished" podID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerID="874644243f1fa55c90fae95d6fb56fc6ba3fd42608ebeab00ebfc75a0bdc98b0" exitCode=0 Sep 30 17:28:59 crc kubenswrapper[4688]: I0930 17:28:59.839530 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfp4r" event={"ID":"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed","Type":"ContainerDied","Data":"874644243f1fa55c90fae95d6fb56fc6ba3fd42608ebeab00ebfc75a0bdc98b0"} Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.240107 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfp4r" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.332773 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-catalog-content\") pod \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.332916 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-utilities\") pod \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.333107 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkdqh\" (UniqueName: \"kubernetes.io/projected/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-kube-api-access-jkdqh\") pod \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\" (UID: \"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed\") " Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.334050 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-utilities" (OuterVolumeSpecName: "utilities") pod "fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" (UID: "fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.340438 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-kube-api-access-jkdqh" (OuterVolumeSpecName: "kube-api-access-jkdqh") pod "fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" (UID: "fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed"). InnerVolumeSpecName "kube-api-access-jkdqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390269 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"] Sep 30 17:29:00 crc kubenswrapper[4688]: E0930 17:29:00.390620 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerName="registry-server" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390639 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerName="registry-server" Sep 30 17:29:00 crc kubenswrapper[4688]: E0930 17:29:00.390660 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a504bc-95c8-475f-b3ec-c1d2026e95c7" containerName="registry-server" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390668 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a504bc-95c8-475f-b3ec-c1d2026e95c7" containerName="registry-server" Sep 30 17:29:00 crc kubenswrapper[4688]: E0930 17:29:00.390682 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerName="extract-content" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390689 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerName="extract-content" Sep 30 17:29:00 crc kubenswrapper[4688]: E0930 17:29:00.390700 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerName="extract-utilities" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390706 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerName="extract-utilities" Sep 30 17:29:00 crc kubenswrapper[4688]: E0930 17:29:00.390717 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerName="extract-content" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390723 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerName="extract-content" Sep 30 17:29:00 crc kubenswrapper[4688]: E0930 17:29:00.390731 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerName="extract-utilities" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390738 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerName="extract-utilities" Sep 30 17:29:00 crc kubenswrapper[4688]: E0930 17:29:00.390749 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerName="registry-server" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390755 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerName="registry-server" Sep 30 17:29:00 crc kubenswrapper[4688]: 
I0930 17:29:00.390861 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerName="registry-server" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390871 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="99a504bc-95c8-475f-b3ec-c1d2026e95c7" containerName="registry-server" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.390884 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e0a66c-1e31-45b3-a514-85484cf2d0ef" containerName="registry-server" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.391602 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" (UID: "fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.392000 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.394737 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vqmpb" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.400740 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"] Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.434458 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkdqh\" (UniqueName: \"kubernetes.io/projected/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-kube-api-access-jkdqh\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.434487 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.434498 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.536457 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfqgn\" (UniqueName: \"kubernetes.io/projected/1aa09f7c-6338-4950-97d8-13e3627ce120-kube-api-access-tfqgn\") pod \"36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") " pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.536636 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-util\") pod \"36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") " pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m" Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.536672 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
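[editor's note] The entries above show the full teardown path for certified-operators-kfp4r: an API-sourced SyncLoop DELETE, a CRI-O kill with gracePeriod=2, the PLEG ContainerDied event with exitCode=0, and then the volume reconciler unmounting the pod's empty-dir and projected volumes. A minimal Go sketch for pulling the kill parameters out of entries like these; the field layout is assumed only from the sample line in this log, not from any kubelet formatting guarantee.

package main

import (
	"fmt"
	"regexp"
)

// killRe matches kubelet "Killing container with a grace period" entries,
// with the field order taken from the 17:28:58.135616 line above.
var killRe = regexp.MustCompile(`"Killing container with a grace period" pod="([^"]+)" podUID="([^"]+)" containerName="([^"]+)" containerID="([^"]+)" gracePeriod=(\d+)`)

func main() {
	entry := `I0930 17:28:58.135616 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kfp4r" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" containerName="registry-server" containerID="cri-o://874644243f1fa55c90fae95d6fb56fc6ba3fd42608ebeab00ebfc75a0bdc98b0" gracePeriod=2`
	if m := killRe.FindStringSubmatch(entry); m != nil {
		// m[1]=pod, m[3]=container name, m[5]=grace period in seconds
		fmt.Printf("pod=%s container=%s gracePeriod=%ss\n", m[1], m[3], m[5])
	}
}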
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.638522 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-util\") pod \"36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") " pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.639059 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-bundle\") pod \"36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") " pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.639123 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfqgn\" (UniqueName: \"kubernetes.io/projected/1aa09f7c-6338-4950-97d8-13e3627ce120-kube-api-access-tfqgn\") pod \"36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") " pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.639296 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-util\") pod \"36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") " pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.639677 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-bundle\") pod \"36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") " pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.659437 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfqgn\" (UniqueName: \"kubernetes.io/projected/1aa09f7c-6338-4950-97d8-13e3627ce120-kube-api-access-tfqgn\") pod \"36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") " pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.709663 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.852906 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfp4r" event={"ID":"fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed","Type":"ContainerDied","Data":"c5f2e9679e0acf84248d60b2da8e90b4c6b16e1b71b641c590a0ebdda86fbf1a"}
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.852978 4688 scope.go:117] "RemoveContainer" containerID="874644243f1fa55c90fae95d6fb56fc6ba3fd42608ebeab00ebfc75a0bdc98b0"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.853022 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfp4r"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.880425 4688 scope.go:117] "RemoveContainer" containerID="5ffd902a981b2e81b4d05bd09abc3b6526fe17b60df6d6b536a1c9007a781620"
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.892696 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kfp4r"]
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.896103 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kfp4r"]
Sep 30 17:29:00 crc kubenswrapper[4688]: I0930 17:29:00.910057 4688 scope.go:117] "RemoveContainer" containerID="7e99e0e9e399c35169c89cbfcea428f5dad9f2052e76d0588a69b98f848ccbf4"
Sep 30 17:29:01 crc kubenswrapper[4688]: I0930 17:29:01.129111 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"]
Sep 30 17:29:01 crc kubenswrapper[4688]: I0930 17:29:01.448385 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed" path="/var/lib/kubelet/pods/fe0dfb7c-133c-4a7d-b576-6e9ff7a34aed/volumes"
Sep 30 17:29:01 crc kubenswrapper[4688]: I0930 17:29:01.861326 4688 generic.go:334] "Generic (PLEG): container finished" podID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerID="690b82c3c1127be29b7c9ac95ce95bd53a25cad67b4b0618f4b9c35cf21035e3" exitCode=0
Sep 30 17:29:01 crc kubenswrapper[4688]: I0930 17:29:01.861375 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m" event={"ID":"1aa09f7c-6338-4950-97d8-13e3627ce120","Type":"ContainerDied","Data":"690b82c3c1127be29b7c9ac95ce95bd53a25cad67b4b0618f4b9c35cf21035e3"}
Sep 30 17:29:01 crc kubenswrapper[4688]: I0930 17:29:01.861433 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m" event={"ID":"1aa09f7c-6338-4950-97d8-13e3627ce120","Type":"ContainerStarted","Data":"cc3576952f59d4d3b9aab84cd55e6070e1e7c7c7ddceeece2facbe4814e05cec"}
Sep 30 17:29:02 crc kubenswrapper[4688]: I0930 17:29:02.872042 4688 generic.go:334] "Generic (PLEG): container finished" podID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerID="45c76c015c5a1df96fb2337dc73d7c694f4c5879034232d5463d1326a2312be9" exitCode=0
Sep 30 17:29:02 crc kubenswrapper[4688]: I0930 17:29:02.872122 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m" event={"ID":"1aa09f7c-6338-4950-97d8-13e3627ce120","Type":"ContainerDied","Data":"45c76c015c5a1df96fb2337dc73d7c694f4c5879034232d5463d1326a2312be9"}
Sep 30 17:29:03 crc kubenswrapper[4688]: I0930 17:29:03.883690 4688 generic.go:334] "Generic (PLEG): container finished" podID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerID="c1ef41cd3c6df83db75c2e7a1dfa400554ab1185df6327ed6dc9136738acafba" exitCode=0
Sep 30 17:29:03 crc kubenswrapper[4688]: I0930 17:29:03.883803 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m" event={"ID":"1aa09f7c-6338-4950-97d8-13e3627ce120","Type":"ContainerDied","Data":"c1ef41cd3c6df83db75c2e7a1dfa400554ab1185df6327ed6dc9136738acafba"}
Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.183917 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m"
Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.318551 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-bundle\") pod \"1aa09f7c-6338-4950-97d8-13e3627ce120\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") "
Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.318803 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-util\") pod \"1aa09f7c-6338-4950-97d8-13e3627ce120\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") "
Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.318868 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfqgn\" (UniqueName: \"kubernetes.io/projected/1aa09f7c-6338-4950-97d8-13e3627ce120-kube-api-access-tfqgn\") pod \"1aa09f7c-6338-4950-97d8-13e3627ce120\" (UID: \"1aa09f7c-6338-4950-97d8-13e3627ce120\") "
Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.319854 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-bundle" (OuterVolumeSpecName: "bundle") pod "1aa09f7c-6338-4950-97d8-13e3627ce120" (UID: "1aa09f7c-6338-4950-97d8-13e3627ce120"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.328567 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa09f7c-6338-4950-97d8-13e3627ce120-kube-api-access-tfqgn" (OuterVolumeSpecName: "kube-api-access-tfqgn") pod "1aa09f7c-6338-4950-97d8-13e3627ce120" (UID: "1aa09f7c-6338-4950-97d8-13e3627ce120"). InnerVolumeSpecName "kube-api-access-tfqgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.339240 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-util" (OuterVolumeSpecName: "util") pod "1aa09f7c-6338-4950-97d8-13e3627ce120" (UID: "1aa09f7c-6338-4950-97d8-13e3627ce120"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.421413 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.422240 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa09f7c-6338-4950-97d8-13e3627ce120-util\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.422383 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfqgn\" (UniqueName: \"kubernetes.io/projected/1aa09f7c-6338-4950-97d8-13e3627ce120-kube-api-access-tfqgn\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.900223 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m" event={"ID":"1aa09f7c-6338-4950-97d8-13e3627ce120","Type":"ContainerDied","Data":"cc3576952f59d4d3b9aab84cd55e6070e1e7c7c7ddceeece2facbe4814e05cec"} Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.900693 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc3576952f59d4d3b9aab84cd55e6070e1e7c7c7ddceeece2facbe4814e05cec" Sep 30 17:29:05 crc kubenswrapper[4688]: I0930 17:29:05.900312 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.304031 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r"] Sep 30 17:29:11 crc kubenswrapper[4688]: E0930 17:29:11.304761 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerName="extract" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.304779 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerName="extract" Sep 30 17:29:11 crc kubenswrapper[4688]: E0930 17:29:11.304798 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerName="util" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.304805 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerName="util" Sep 30 17:29:11 crc kubenswrapper[4688]: E0930 17:29:11.304812 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerName="pull" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.304820 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerName="pull" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.304941 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa09f7c-6338-4950-97d8-13e3627ce120" containerName="extract" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.305614 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.307824 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-dk827" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.342968 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r"] Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.410679 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt8xk\" (UniqueName: \"kubernetes.io/projected/0e55afb9-bd88-450e-bb6a-5cba368b3f05-kube-api-access-mt8xk\") pod \"openstack-operator-controller-operator-77647d9b49-6c75r\" (UID: \"0e55afb9-bd88-450e-bb6a-5cba368b3f05\") " pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.512937 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt8xk\" (UniqueName: \"kubernetes.io/projected/0e55afb9-bd88-450e-bb6a-5cba368b3f05-kube-api-access-mt8xk\") pod \"openstack-operator-controller-operator-77647d9b49-6c75r\" (UID: \"0e55afb9-bd88-450e-bb6a-5cba368b3f05\") " pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.535459 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt8xk\" (UniqueName: \"kubernetes.io/projected/0e55afb9-bd88-450e-bb6a-5cba368b3f05-kube-api-access-mt8xk\") pod \"openstack-operator-controller-operator-77647d9b49-6c75r\" (UID: \"0e55afb9-bd88-450e-bb6a-5cba368b3f05\") " pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" Sep 30 17:29:11 crc kubenswrapper[4688]: I0930 17:29:11.628095 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" Sep 30 17:29:12 crc kubenswrapper[4688]: I0930 17:29:12.212734 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r"] Sep 30 17:29:12 crc kubenswrapper[4688]: I0930 17:29:12.947460 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" event={"ID":"0e55afb9-bd88-450e-bb6a-5cba368b3f05","Type":"ContainerStarted","Data":"479e942025d0aa9777ea0503952898aa70b74a294f30b3f621056199ca863d4f"} Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.148720 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t8xsv"] Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.151401 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.162550 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8xsv"] Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.323496 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-utilities\") pod \"redhat-marketplace-t8xsv\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") " pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.323712 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-catalog-content\") pod \"redhat-marketplace-t8xsv\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") " pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.323740 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmch5\" (UniqueName: \"kubernetes.io/projected/7561fd83-4098-414f-8038-41bb60b8b33e-kube-api-access-pmch5\") pod \"redhat-marketplace-t8xsv\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") " pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.425702 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-catalog-content\") pod \"redhat-marketplace-t8xsv\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") " pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.425767 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmch5\" (UniqueName: \"kubernetes.io/projected/7561fd83-4098-414f-8038-41bb60b8b33e-kube-api-access-pmch5\") pod \"redhat-marketplace-t8xsv\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") " pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.426001 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-utilities\") pod \"redhat-marketplace-t8xsv\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") " pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.426787 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-utilities\") pod \"redhat-marketplace-t8xsv\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") " pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.426794 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-catalog-content\") pod \"redhat-marketplace-t8xsv\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") " pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.453006 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pmch5\" (UniqueName: \"kubernetes.io/projected/7561fd83-4098-414f-8038-41bb60b8b33e-kube-api-access-pmch5\") pod \"redhat-marketplace-t8xsv\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") " pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.492891 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8xsv" Sep 30 17:29:16 crc kubenswrapper[4688]: I0930 17:29:16.973734 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" event={"ID":"0e55afb9-bd88-450e-bb6a-5cba368b3f05","Type":"ContainerStarted","Data":"5909d46efb295a553ceac15bd6d6f97821f17bd5110af4330bef829637baf0bb"} Sep 30 17:29:17 crc kubenswrapper[4688]: I0930 17:29:17.013191 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8xsv"] Sep 30 17:29:17 crc kubenswrapper[4688]: I0930 17:29:17.992547 4688 generic.go:334] "Generic (PLEG): container finished" podID="7561fd83-4098-414f-8038-41bb60b8b33e" containerID="8ac9731275ac5c458a8bf81477397b12d07f42b6fd45192aaea28de585cf83ed" exitCode=0 Sep 30 17:29:17 crc kubenswrapper[4688]: I0930 17:29:17.992691 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8xsv" event={"ID":"7561fd83-4098-414f-8038-41bb60b8b33e","Type":"ContainerDied","Data":"8ac9731275ac5c458a8bf81477397b12d07f42b6fd45192aaea28de585cf83ed"} Sep 30 17:29:17 crc kubenswrapper[4688]: I0930 17:29:17.992736 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8xsv" event={"ID":"7561fd83-4098-414f-8038-41bb60b8b33e","Type":"ContainerStarted","Data":"6fea947882b0b8db490beb25e7a65e263a8a7fe8049a55a81daf9c7c7dd107d0"} Sep 30 17:29:19 crc kubenswrapper[4688]: I0930 17:29:19.003162 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" event={"ID":"0e55afb9-bd88-450e-bb6a-5cba368b3f05","Type":"ContainerStarted","Data":"8855d2d7f77dc1bd408208b44548594b5d4e525afd6f76f6cbf67f9c9196b1bd"} Sep 30 17:29:19 crc kubenswrapper[4688]: I0930 17:29:19.003622 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" Sep 30 17:29:19 crc kubenswrapper[4688]: I0930 17:29:19.046993 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" podStartSLOduration=1.9087083310000001 podStartE2EDuration="8.046969563s" podCreationTimestamp="2025-09-30 17:29:11 +0000 UTC" firstStartedPulling="2025-09-30 17:29:12.23422578 +0000 UTC m=+809.529663358" lastFinishedPulling="2025-09-30 17:29:18.372487032 +0000 UTC m=+815.667924590" observedRunningTime="2025-09-30 17:29:19.043868433 +0000 UTC m=+816.339306011" watchObservedRunningTime="2025-09-30 17:29:19.046969563 +0000 UTC m=+816.342407121" Sep 30 17:29:20 crc kubenswrapper[4688]: I0930 17:29:20.013814 4688 generic.go:334] "Generic (PLEG): container finished" podID="7561fd83-4098-414f-8038-41bb60b8b33e" containerID="92bd105638f63d032139ec0e818a1c277a7a3a89762b63fe42fb8b4132e4c025" exitCode=0 Sep 30 17:29:20 crc kubenswrapper[4688]: I0930 17:29:20.014432 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
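[editor's note] The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration (8.046969563s) is observedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.9087083310000001) is that E2E figure minus the image-pull window (lastFinishedPulling - firstStartedPulling, easiest to read off the monotonic m=+ offsets). A quick check in Go using only the numbers from this entry; this subtraction-based reading of the SLO field is inferred from these values, not taken from kubelet documentation.

package main

import "fmt"

func main() {
	// Monotonic m=+ offsets and durations from the 17:29:19.046993 entry above.
	firstStartedPulling := 809.529663358
	lastFinishedPulling := 815.667924590
	podStartE2E := 8.046969563 // observedRunningTime - podCreationTimestamp, as logged

	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window: %.9fs\n", pullWindow)            // ~6.138261232s
	fmt.Printf("SLO duration:      %.9fs\n", podStartE2E-pullWindow) // ~1.908708331s, matching the entry
}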
pod="openshift-marketplace/redhat-marketplace-t8xsv" event={"ID":"7561fd83-4098-414f-8038-41bb60b8b33e","Type":"ContainerDied","Data":"92bd105638f63d032139ec0e818a1c277a7a3a89762b63fe42fb8b4132e4c025"} Sep 30 17:29:20 crc kubenswrapper[4688]: I0930 17:29:20.743753 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-frzw9"] Sep 30 17:29:20 crc kubenswrapper[4688]: I0930 17:29:20.746229 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:20 crc kubenswrapper[4688]: I0930 17:29:20.755834 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-frzw9"] Sep 30 17:29:20 crc kubenswrapper[4688]: I0930 17:29:20.903558 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5qr7\" (UniqueName: \"kubernetes.io/projected/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-kube-api-access-m5qr7\") pod \"community-operators-frzw9\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") " pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:20 crc kubenswrapper[4688]: I0930 17:29:20.903688 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-utilities\") pod \"community-operators-frzw9\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") " pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:20 crc kubenswrapper[4688]: I0930 17:29:20.903742 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-catalog-content\") pod \"community-operators-frzw9\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") " pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.005047 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-catalog-content\") pod \"community-operators-frzw9\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") " pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.005163 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5qr7\" (UniqueName: \"kubernetes.io/projected/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-kube-api-access-m5qr7\") pod \"community-operators-frzw9\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") " pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.005207 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-utilities\") pod \"community-operators-frzw9\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") " pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.005802 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-catalog-content\") pod \"community-operators-frzw9\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") " 
pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.005892 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-utilities\") pod \"community-operators-frzw9\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") " pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.024783 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8xsv" event={"ID":"7561fd83-4098-414f-8038-41bb60b8b33e","Type":"ContainerStarted","Data":"b469b885959a42510c90ffdb1a64c41581f227d10cc7b7f6930a3703bea357cf"} Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.049104 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t8xsv" podStartSLOduration=2.714807344 podStartE2EDuration="5.049077468s" podCreationTimestamp="2025-09-30 17:29:16 +0000 UTC" firstStartedPulling="2025-09-30 17:29:18.299751463 +0000 UTC m=+815.595189031" lastFinishedPulling="2025-09-30 17:29:20.634021587 +0000 UTC m=+817.929459155" observedRunningTime="2025-09-30 17:29:21.044696685 +0000 UTC m=+818.340134243" watchObservedRunningTime="2025-09-30 17:29:21.049077468 +0000 UTC m=+818.344515026" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.050524 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5qr7\" (UniqueName: \"kubernetes.io/projected/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-kube-api-access-m5qr7\") pod \"community-operators-frzw9\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") " pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.074689 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.631890 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-77647d9b49-6c75r" Sep 30 17:29:21 crc kubenswrapper[4688]: I0930 17:29:21.766194 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-frzw9"] Sep 30 17:29:21 crc kubenswrapper[4688]: W0930 17:29:21.778658 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6af3ef4e_5f5c_47b5_b7f6_4bd20168d134.slice/crio-206976961eb2dd8233f1a1c0c834fa183896b55966f4503bddb6b7f8d68f308b WatchSource:0}: Error finding container 206976961eb2dd8233f1a1c0c834fa183896b55966f4503bddb6b7f8d68f308b: Status 404 returned error can't find the container with id 206976961eb2dd8233f1a1c0c834fa183896b55966f4503bddb6b7f8d68f308b Sep 30 17:29:22 crc kubenswrapper[4688]: I0930 17:29:22.033813 4688 generic.go:334] "Generic (PLEG): container finished" podID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerID="a0e342670408e77d3213afe31f58684ceeda1378e2b81ee419f55e2b74fea8d9" exitCode=0 Sep 30 17:29:22 crc kubenswrapper[4688]: I0930 17:29:22.033884 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frzw9" event={"ID":"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134","Type":"ContainerDied","Data":"a0e342670408e77d3213afe31f58684ceeda1378e2b81ee419f55e2b74fea8d9"} Sep 30 17:29:22 crc kubenswrapper[4688]: I0930 17:29:22.034331 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frzw9" event={"ID":"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134","Type":"ContainerStarted","Data":"206976961eb2dd8233f1a1c0c834fa183896b55966f4503bddb6b7f8d68f308b"} Sep 30 17:29:23 crc kubenswrapper[4688]: I0930 17:29:23.044682 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frzw9" event={"ID":"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134","Type":"ContainerStarted","Data":"4fa2ae812da948cf20bd914de1843c677d55d59dc0a27db77f51ce6afce82fc0"} Sep 30 17:29:24 crc kubenswrapper[4688]: I0930 17:29:24.052647 4688 generic.go:334] "Generic (PLEG): container finished" podID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerID="4fa2ae812da948cf20bd914de1843c677d55d59dc0a27db77f51ce6afce82fc0" exitCode=0 Sep 30 17:29:24 crc kubenswrapper[4688]: I0930 17:29:24.052704 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frzw9" event={"ID":"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134","Type":"ContainerDied","Data":"4fa2ae812da948cf20bd914de1843c677d55d59dc0a27db77f51ce6afce82fc0"} Sep 30 17:29:25 crc kubenswrapper[4688]: I0930 17:29:25.065230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frzw9" event={"ID":"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134","Type":"ContainerStarted","Data":"9058bf0b2f88b3066ee835cdeb68d9cdd97a543265eaf3ba6a0aa8401b539c97"} Sep 30 17:29:25 crc kubenswrapper[4688]: I0930 17:29:25.089774 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-frzw9" podStartSLOduration=2.644662172 podStartE2EDuration="5.089748598s" podCreationTimestamp="2025-09-30 17:29:20 +0000 UTC" firstStartedPulling="2025-09-30 17:29:22.036144954 +0000 UTC m=+819.331582502" 
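[editor's note] The community-operators-frzw9 lines above are a textbook catalog-pod startup: one init-style container finishes with exitCode=0, a second does the same, then the long-running registry-server starts. Each transition arrives as a PLEG line whose event={...} payload is plain JSON, so a per-pod timeline can be rebuilt mechanically. A minimal sketch; the struct fields mirror the payloads shown here and nothing beyond them is assumed.

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// plegEvent mirrors the event={...} payload in the PLEG lines above.
type plegEvent struct {
	ID   string // pod UID
	Type string // ContainerStarted / ContainerDied
	Data string // container or sandbox ID
}

func main() {
	line := `I0930 17:29:24.052704 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frzw9" event={"ID":"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134","Type":"ContainerDied","Data":"4fa2ae812da948cf20bd914de1843c677d55d59dc0a27db77f51ce6afce82fc0"}`
	i := strings.Index(line, "event=")
	if i < 0 {
		return
	}
	var ev plegEvent
	if err := json.Unmarshal([]byte(line[i+len("event="):]), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (pod %s)\n", ev.Type, ev.Data[:12], ev.ID)
}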
Sep 30 17:29:26 crc kubenswrapper[4688]: I0930 17:29:26.493229 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t8xsv"
Sep 30 17:29:26 crc kubenswrapper[4688]: I0930 17:29:26.493666 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t8xsv"
Sep 30 17:29:26 crc kubenswrapper[4688]: I0930 17:29:26.552857 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t8xsv"
Sep 30 17:29:27 crc kubenswrapper[4688]: I0930 17:29:27.124053 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t8xsv"
Sep 30 17:29:29 crc kubenswrapper[4688]: I0930 17:29:29.734661 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8xsv"]
Sep 30 17:29:29 crc kubenswrapper[4688]: I0930 17:29:29.735028 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t8xsv" podUID="7561fd83-4098-414f-8038-41bb60b8b33e" containerName="registry-server" containerID="cri-o://b469b885959a42510c90ffdb1a64c41581f227d10cc7b7f6930a3703bea357cf" gracePeriod=2
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.111506 4688 generic.go:334] "Generic (PLEG): container finished" podID="7561fd83-4098-414f-8038-41bb60b8b33e" containerID="b469b885959a42510c90ffdb1a64c41581f227d10cc7b7f6930a3703bea357cf" exitCode=0
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.111633 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8xsv" event={"ID":"7561fd83-4098-414f-8038-41bb60b8b33e","Type":"ContainerDied","Data":"b469b885959a42510c90ffdb1a64c41581f227d10cc7b7f6930a3703bea357cf"}
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.111694 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8xsv" event={"ID":"7561fd83-4098-414f-8038-41bb60b8b33e","Type":"ContainerDied","Data":"6fea947882b0b8db490beb25e7a65e263a8a7fe8049a55a81daf9c7c7dd107d0"}
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.111708 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fea947882b0b8db490beb25e7a65e263a8a7fe8049a55a81daf9c7c7dd107d0"
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.137076 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8xsv"
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.264968 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmch5\" (UniqueName: \"kubernetes.io/projected/7561fd83-4098-414f-8038-41bb60b8b33e-kube-api-access-pmch5\") pod \"7561fd83-4098-414f-8038-41bb60b8b33e\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") "
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.265112 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-catalog-content\") pod \"7561fd83-4098-414f-8038-41bb60b8b33e\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") "
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.265214 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-utilities\") pod \"7561fd83-4098-414f-8038-41bb60b8b33e\" (UID: \"7561fd83-4098-414f-8038-41bb60b8b33e\") "
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.266736 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-utilities" (OuterVolumeSpecName: "utilities") pod "7561fd83-4098-414f-8038-41bb60b8b33e" (UID: "7561fd83-4098-414f-8038-41bb60b8b33e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.267504 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.302860 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7561fd83-4098-414f-8038-41bb60b8b33e-kube-api-access-pmch5" (OuterVolumeSpecName: "kube-api-access-pmch5") pod "7561fd83-4098-414f-8038-41bb60b8b33e" (UID: "7561fd83-4098-414f-8038-41bb60b8b33e"). InnerVolumeSpecName "kube-api-access-pmch5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.328868 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7561fd83-4098-414f-8038-41bb60b8b33e" (UID: "7561fd83-4098-414f-8038-41bb60b8b33e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.369732 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmch5\" (UniqueName: \"kubernetes.io/projected/7561fd83-4098-414f-8038-41bb60b8b33e-kube-api-access-pmch5\") on node \"crc\" DevicePath \"\""
Sep 30 17:29:30 crc kubenswrapper[4688]: I0930 17:29:30.369812 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7561fd83-4098-414f-8038-41bb60b8b33e-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 17:29:31 crc kubenswrapper[4688]: I0930 17:29:31.075661 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-frzw9"
Sep 30 17:29:31 crc kubenswrapper[4688]: I0930 17:29:31.075739 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-frzw9"
Sep 30 17:29:31 crc kubenswrapper[4688]: I0930 17:29:31.117824 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-frzw9"
Sep 30 17:29:31 crc kubenswrapper[4688]: I0930 17:29:31.120000 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8xsv"
Sep 30 17:29:31 crc kubenswrapper[4688]: I0930 17:29:31.162369 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-frzw9"
Sep 30 17:29:31 crc kubenswrapper[4688]: I0930 17:29:31.164884 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8xsv"]
Sep 30 17:29:31 crc kubenswrapper[4688]: I0930 17:29:31.178045 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8xsv"]
Sep 30 17:29:31 crc kubenswrapper[4688]: I0930 17:29:31.440826 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7561fd83-4098-414f-8038-41bb60b8b33e" path="/var/lib/kubelet/pods/7561fd83-4098-414f-8038-41bb60b8b33e/volumes"
Sep 30 17:29:34 crc kubenswrapper[4688]: I0930 17:29:34.734064 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-frzw9"]
Sep 30 17:29:34 crc kubenswrapper[4688]: I0930 17:29:34.735521 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-frzw9" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="registry-server" containerID="cri-o://9058bf0b2f88b3066ee835cdeb68d9cdd97a543265eaf3ba6a0aa8401b539c97" gracePeriod=2
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.148514 4688 generic.go:334] "Generic (PLEG): container finished" podID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerID="9058bf0b2f88b3066ee835cdeb68d9cdd97a543265eaf3ba6a0aa8401b539c97" exitCode=0
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.148634 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frzw9" event={"ID":"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134","Type":"ContainerDied","Data":"9058bf0b2f88b3066ee835cdeb68d9cdd97a543265eaf3ba6a0aa8401b539c97"}
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.148674 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frzw9" event={"ID":"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134","Type":"ContainerDied","Data":"206976961eb2dd8233f1a1c0c834fa183896b55966f4503bddb6b7f8d68f308b"}
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.148693 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="206976961eb2dd8233f1a1c0c834fa183896b55966f4503bddb6b7f8d68f308b"
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.171892 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frzw9"
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.278344 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5qr7\" (UniqueName: \"kubernetes.io/projected/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-kube-api-access-m5qr7\") pod \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") "
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.278429 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-utilities\") pod \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") "
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.278716 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-catalog-content\") pod \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\" (UID: \"6af3ef4e-5f5c-47b5-b7f6-4bd20168d134\") "
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.280181 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-utilities" (OuterVolumeSpecName: "utilities") pod "6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" (UID: "6af3ef4e-5f5c-47b5-b7f6-4bd20168d134"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.286812 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-kube-api-access-m5qr7" (OuterVolumeSpecName: "kube-api-access-m5qr7") pod "6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" (UID: "6af3ef4e-5f5c-47b5-b7f6-4bd20168d134"). InnerVolumeSpecName "kube-api-access-m5qr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.328766 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" (UID: "6af3ef4e-5f5c-47b5-b7f6-4bd20168d134"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.381184 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5qr7\" (UniqueName: \"kubernetes.io/projected/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-kube-api-access-m5qr7\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.381339 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:35 crc kubenswrapper[4688]: I0930 17:29:35.381353 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:29:36 crc kubenswrapper[4688]: I0930 17:29:36.156150 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frzw9" Sep 30 17:29:36 crc kubenswrapper[4688]: I0930 17:29:36.179904 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-frzw9"] Sep 30 17:29:36 crc kubenswrapper[4688]: I0930 17:29:36.185206 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-frzw9"] Sep 30 17:29:37 crc kubenswrapper[4688]: I0930 17:29:37.439519 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" path="/var/lib/kubelet/pods/6af3ef4e-5f5c-47b5-b7f6-4bd20168d134/volumes" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.706262 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88"] Sep 30 17:29:56 crc kubenswrapper[4688]: E0930 17:29:56.707399 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="extract-utilities" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.707420 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="extract-utilities" Sep 30 17:29:56 crc kubenswrapper[4688]: E0930 17:29:56.707444 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="extract-content" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.707452 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="extract-content" Sep 30 17:29:56 crc kubenswrapper[4688]: E0930 17:29:56.707468 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7561fd83-4098-414f-8038-41bb60b8b33e" containerName="extract-utilities" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.707476 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7561fd83-4098-414f-8038-41bb60b8b33e" containerName="extract-utilities" Sep 30 17:29:56 crc kubenswrapper[4688]: E0930 17:29:56.707490 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="registry-server" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.707496 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="registry-server" Sep 30 17:29:56 crc kubenswrapper[4688]: E0930 17:29:56.707506 4688 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7561fd83-4098-414f-8038-41bb60b8b33e" containerName="registry-server" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.707513 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7561fd83-4098-414f-8038-41bb60b8b33e" containerName="registry-server" Sep 30 17:29:56 crc kubenswrapper[4688]: E0930 17:29:56.707533 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7561fd83-4098-414f-8038-41bb60b8b33e" containerName="extract-content" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.707540 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7561fd83-4098-414f-8038-41bb60b8b33e" containerName="extract-content" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.707713 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="registry-server" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.707738 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7561fd83-4098-414f-8038-41bb60b8b33e" containerName="registry-server" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.708696 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.713194 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-742kg" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.713383 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.714705 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.717506 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-sngdp" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.720664 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.721979 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.724018 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-hb2gr" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.731705 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.811370 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.814976 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.850024 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.851391 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.852501 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4csf\" (UniqueName: \"kubernetes.io/projected/ef1716f0-3b35-433b-8ad9-5127c151a43f-kube-api-access-k4csf\") pod \"cinder-operator-controller-manager-644bddb6d8-6ng88\" (UID: \"ef1716f0-3b35-433b-8ad9-5127c151a43f\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.852604 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmfkx\" (UniqueName: \"kubernetes.io/projected/08d7f313-71b6-4d2e-b67d-c9e1ddaca02d-kube-api-access-jmfkx\") pod \"designate-operator-controller-manager-84f4f7b77b-xnhgk\" (UID: \"08d7f313-71b6-4d2e-b67d-c9e1ddaca02d\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.852673 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsqn\" (UniqueName: \"kubernetes.io/projected/b4b7968e-2dbf-463c-addf-f79bb82c5fb3-kube-api-access-lfsqn\") pod \"barbican-operator-controller-manager-6ff8b75857-zrrc8\" (UID: \"b4b7968e-2dbf-463c-addf-f79bb82c5fb3\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.854411 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-d7d88" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.893756 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.901261 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.903255 4688 util.go:30] "No sandbox for pod can be found. 
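[editor's note] As the new operator-manager pods are added at 17:29:56, the kubelet first purges leftover CPU and memory pinning state for the catalog pods torn down above; each stale container shows up as a paired cpu_manager/state_mem entry, plus one memory_manager line per registry-server. A sketch that reduces such a block to the unique podUID/containerName pairs being purged; the message pattern is assumed from these entries.

package main

import (
	"fmt"
	"regexp"
)

// staleRe matches both "RemoveStaleState: removing container" (cpu_manager)
// and "RemoveStaleState removing state" (memory_manager) entries above.
var staleRe = regexp.MustCompile(`RemoveStaleState[^"]*" podUID="([^"]+)" containerName="([^"]+)"`)

func main() {
	lines := []string{
		`E0930 17:29:56.707399 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="extract-utilities"`,
		`I0930 17:29:56.707713 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af3ef4e-5f5c-47b5-b7f6-4bd20168d134" containerName="registry-server"`,
	}
	unique := map[string]bool{}
	for _, l := range lines {
		if m := staleRe.FindStringSubmatch(l); m != nil {
			unique[m[1]+"/"+m[2]] = true
		}
	}
	for k := range unique {
		fmt.Println("stale:", k)
	}
}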
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.908624 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.910101 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.911983 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-nhvqx" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.912989 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-t9hl6" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.934659 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.935956 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.938709 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.943375 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-j52ls" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.954404 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4mv5\" (UniqueName: \"kubernetes.io/projected/7c01591d-ed02-4a38-b0ec-560d9b41cc8c-kube-api-access-d4mv5\") pod \"heat-operator-controller-manager-5d889d78cf-vgshh\" (UID: \"7c01591d-ed02-4a38-b0ec-560d9b41cc8c\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.954464 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmfkx\" (UniqueName: \"kubernetes.io/projected/08d7f313-71b6-4d2e-b67d-c9e1ddaca02d-kube-api-access-jmfkx\") pod \"designate-operator-controller-manager-84f4f7b77b-xnhgk\" (UID: \"08d7f313-71b6-4d2e-b67d-c9e1ddaca02d\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.954502 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5bhg\" (UniqueName: \"kubernetes.io/projected/1bea0309-4386-4a05-a980-1104ec2e37c2-kube-api-access-j5bhg\") pod \"glance-operator-controller-manager-84958c4d49-98rhq\" (UID: \"1bea0309-4386-4a05-a980-1104ec2e37c2\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.954530 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfsqn\" (UniqueName: \"kubernetes.io/projected/b4b7968e-2dbf-463c-addf-f79bb82c5fb3-kube-api-access-lfsqn\") pod \"barbican-operator-controller-manager-6ff8b75857-zrrc8\" (UID: \"b4b7968e-2dbf-463c-addf-f79bb82c5fb3\") " 
pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.954607 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4csf\" (UniqueName: \"kubernetes.io/projected/ef1716f0-3b35-433b-8ad9-5127c151a43f-kube-api-access-k4csf\") pod \"cinder-operator-controller-manager-644bddb6d8-6ng88\" (UID: \"ef1716f0-3b35-433b-8ad9-5127c151a43f\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.956073 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.977586 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.994738 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg"] Sep 30 17:29:56 crc kubenswrapper[4688]: I0930 17:29:56.996164 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.004682 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6tsc2" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.015029 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.029997 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmfkx\" (UniqueName: \"kubernetes.io/projected/08d7f313-71b6-4d2e-b67d-c9e1ddaca02d-kube-api-access-jmfkx\") pod \"designate-operator-controller-manager-84f4f7b77b-xnhgk\" (UID: \"08d7f313-71b6-4d2e-b67d-c9e1ddaca02d\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.045632 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.045715 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4csf\" (UniqueName: \"kubernetes.io/projected/ef1716f0-3b35-433b-8ad9-5127c151a43f-kube-api-access-k4csf\") pod \"cinder-operator-controller-manager-644bddb6d8-6ng88\" (UID: \"ef1716f0-3b35-433b-8ad9-5127c151a43f\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.049829 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.055722 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.055877 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4mv5\" (UniqueName: \"kubernetes.io/projected/7c01591d-ed02-4a38-b0ec-560d9b41cc8c-kube-api-access-d4mv5\") pod \"heat-operator-controller-manager-5d889d78cf-vgshh\" (UID: \"7c01591d-ed02-4a38-b0ec-560d9b41cc8c\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.055927 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8eba803d-c206-4418-a0b6-8594af2670aa-cert\") pod \"infra-operator-controller-manager-7d857cc749-zslc4\" (UID: \"8eba803d-c206-4418-a0b6-8594af2670aa\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.055952 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs9bz\" (UniqueName: \"kubernetes.io/projected/940175df-0d37-4540-b0c7-a81bc3261ba6-kube-api-access-hs9bz\") pod \"ironic-operator-controller-manager-7975b88857-7l8bg\" (UID: \"940175df-0d37-4540-b0c7-a81bc3261ba6\") " pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.055973 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5bhg\" (UniqueName: \"kubernetes.io/projected/1bea0309-4386-4a05-a980-1104ec2e37c2-kube-api-access-j5bhg\") pod \"glance-operator-controller-manager-84958c4d49-98rhq\" (UID: \"1bea0309-4386-4a05-a980-1104ec2e37c2\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.056014 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh6kg\" (UniqueName: \"kubernetes.io/projected/8eba803d-c206-4418-a0b6-8594af2670aa-kube-api-access-lh6kg\") pod \"infra-operator-controller-manager-7d857cc749-zslc4\" (UID: \"8eba803d-c206-4418-a0b6-8594af2670aa\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.056044 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfqj\" (UniqueName: \"kubernetes.io/projected/8a89103b-24b0-456d-88c5-f6abd68e75fb-kube-api-access-kzfqj\") pod \"horizon-operator-controller-manager-9f4696d94-bklgp\" (UID: \"8a89103b-24b0-456d-88c5-f6abd68e75fb\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.065982 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.067239 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.068068 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bgkml" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.068199 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfsqn\" (UniqueName: \"kubernetes.io/projected/b4b7968e-2dbf-463c-addf-f79bb82c5fb3-kube-api-access-lfsqn\") pod \"barbican-operator-controller-manager-6ff8b75857-zrrc8\" (UID: \"b4b7968e-2dbf-463c-addf-f79bb82c5fb3\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.075845 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-886sw"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.076076 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-l7jkd" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.077168 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.083004 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lxslv" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.091946 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5bhg\" (UniqueName: \"kubernetes.io/projected/1bea0309-4386-4a05-a980-1104ec2e37c2-kube-api-access-j5bhg\") pod \"glance-operator-controller-manager-84958c4d49-98rhq\" (UID: \"1bea0309-4386-4a05-a980-1104ec2e37c2\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.108382 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.115431 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4mv5\" (UniqueName: \"kubernetes.io/projected/7c01591d-ed02-4a38-b0ec-560d9b41cc8c-kube-api-access-d4mv5\") pod \"heat-operator-controller-manager-5d889d78cf-vgshh\" (UID: \"7c01591d-ed02-4a38-b0ec-560d9b41cc8c\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.119278 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.120011 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.133127 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.146942 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.157713 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfqj\" (UniqueName: \"kubernetes.io/projected/8a89103b-24b0-456d-88c5-f6abd68e75fb-kube-api-access-kzfqj\") pod \"horizon-operator-controller-manager-9f4696d94-bklgp\" (UID: \"8a89103b-24b0-456d-88c5-f6abd68e75fb\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.157781 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95lvt\" (UniqueName: \"kubernetes.io/projected/505c7e1c-5b6c-4736-8a1f-93ed5a92b47e-kube-api-access-95lvt\") pod \"keystone-operator-controller-manager-5bd55b4bff-t64tg\" (UID: \"505c7e1c-5b6c-4736-8a1f-93ed5a92b47e\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.157810 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lsr\" (UniqueName: \"kubernetes.io/projected/398df167-3c2c-416f-b77c-baafa00ce747-kube-api-access-p5lsr\") pod \"manila-operator-controller-manager-6d68dbc695-7tgsx\" (UID: \"398df167-3c2c-416f-b77c-baafa00ce747\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.157856 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7cr8\" (UniqueName: \"kubernetes.io/projected/f9796897-79b7-4273-8f85-f3c064a465af-kube-api-access-m7cr8\") pod \"mariadb-operator-controller-manager-88c7-886sw\" (UID: \"f9796897-79b7-4273-8f85-f3c064a465af\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.157895 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8eba803d-c206-4418-a0b6-8594af2670aa-cert\") pod \"infra-operator-controller-manager-7d857cc749-zslc4\" (UID: \"8eba803d-c206-4418-a0b6-8594af2670aa\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.157917 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs9bz\" (UniqueName: \"kubernetes.io/projected/940175df-0d37-4540-b0c7-a81bc3261ba6-kube-api-access-hs9bz\") pod \"ironic-operator-controller-manager-7975b88857-7l8bg\" (UID: \"940175df-0d37-4540-b0c7-a81bc3261ba6\") " pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.157977 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh6kg\" (UniqueName: \"kubernetes.io/projected/8eba803d-c206-4418-a0b6-8594af2670aa-kube-api-access-lh6kg\") pod \"infra-operator-controller-manager-7d857cc749-zslc4\" (UID: \"8eba803d-c206-4418-a0b6-8594af2670aa\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" Sep 30 17:29:57 crc 
kubenswrapper[4688]: E0930 17:29:57.158606 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Sep 30 17:29:57 crc kubenswrapper[4688]: E0930 17:29:57.158662 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eba803d-c206-4418-a0b6-8594af2670aa-cert podName:8eba803d-c206-4418-a0b6-8594af2670aa nodeName:}" failed. No retries permitted until 2025-09-30 17:29:57.658644052 +0000 UTC m=+854.954081610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8eba803d-c206-4418-a0b6-8594af2670aa-cert") pod "infra-operator-controller-manager-7d857cc749-zslc4" (UID: "8eba803d-c206-4418-a0b6-8594af2670aa") : secret "infra-operator-webhook-server-cert" not found Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.172669 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.174226 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.178954 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.189120 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rc5s9" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.195788 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-886sw"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.196003 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh6kg\" (UniqueName: \"kubernetes.io/projected/8eba803d-c206-4418-a0b6-8594af2670aa-kube-api-access-lh6kg\") pod \"infra-operator-controller-manager-7d857cc749-zslc4\" (UID: \"8eba803d-c206-4418-a0b6-8594af2670aa\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.212212 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.216031 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.226725 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfqj\" (UniqueName: \"kubernetes.io/projected/8a89103b-24b0-456d-88c5-f6abd68e75fb-kube-api-access-kzfqj\") pod \"horizon-operator-controller-manager-9f4696d94-bklgp\" (UID: \"8a89103b-24b0-456d-88c5-f6abd68e75fb\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.260233 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ljl\" (UniqueName: \"kubernetes.io/projected/16f6e3cb-02d6-4820-953b-a728440b812f-kube-api-access-n7ljl\") pod \"neutron-operator-controller-manager-64d7b59854-h8tzm\" (UID: \"16f6e3cb-02d6-4820-953b-a728440b812f\") " pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.260340 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95lvt\" (UniqueName: \"kubernetes.io/projected/505c7e1c-5b6c-4736-8a1f-93ed5a92b47e-kube-api-access-95lvt\") pod \"keystone-operator-controller-manager-5bd55b4bff-t64tg\" (UID: \"505c7e1c-5b6c-4736-8a1f-93ed5a92b47e\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.260389 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lsr\" (UniqueName: \"kubernetes.io/projected/398df167-3c2c-416f-b77c-baafa00ce747-kube-api-access-p5lsr\") pod \"manila-operator-controller-manager-6d68dbc695-7tgsx\" (UID: \"398df167-3c2c-416f-b77c-baafa00ce747\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.260446 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p76kz\" (UniqueName: \"kubernetes.io/projected/ef369140-3b22-4cd7-926f-525ebde98f1a-kube-api-access-p76kz\") pod \"nova-operator-controller-manager-c7c776c96-q5q9z\" (UID: \"ef369140-3b22-4cd7-926f-525ebde98f1a\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.260506 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7cr8\" (UniqueName: \"kubernetes.io/projected/f9796897-79b7-4273-8f85-f3c064a465af-kube-api-access-m7cr8\") pod \"mariadb-operator-controller-manager-88c7-886sw\" (UID: \"f9796897-79b7-4273-8f85-f3c064a465af\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.270162 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.294107 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rwx9z" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.294406 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.314940 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.325497 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95lvt\" (UniqueName: \"kubernetes.io/projected/505c7e1c-5b6c-4736-8a1f-93ed5a92b47e-kube-api-access-95lvt\") pod \"keystone-operator-controller-manager-5bd55b4bff-t64tg\" (UID: \"505c7e1c-5b6c-4736-8a1f-93ed5a92b47e\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.325549 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs9bz\" (UniqueName: \"kubernetes.io/projected/940175df-0d37-4540-b0c7-a81bc3261ba6-kube-api-access-hs9bz\") pod \"ironic-operator-controller-manager-7975b88857-7l8bg\" (UID: \"940175df-0d37-4540-b0c7-a81bc3261ba6\") " pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.325588 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lsr\" (UniqueName: \"kubernetes.io/projected/398df167-3c2c-416f-b77c-baafa00ce747-kube-api-access-p5lsr\") pod \"manila-operator-controller-manager-6d68dbc695-7tgsx\" (UID: \"398df167-3c2c-416f-b77c-baafa00ce747\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.363848 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7cr8\" (UniqueName: \"kubernetes.io/projected/f9796897-79b7-4273-8f85-f3c064a465af-kube-api-access-m7cr8\") pod \"mariadb-operator-controller-manager-88c7-886sw\" (UID: \"f9796897-79b7-4273-8f85-f3c064a465af\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.386717 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.387724 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7ljl\" (UniqueName: \"kubernetes.io/projected/16f6e3cb-02d6-4820-953b-a728440b812f-kube-api-access-n7ljl\") pod \"neutron-operator-controller-manager-64d7b59854-h8tzm\" (UID: \"16f6e3cb-02d6-4820-953b-a728440b812f\") " pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.387836 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p76kz\" (UniqueName: \"kubernetes.io/projected/ef369140-3b22-4cd7-926f-525ebde98f1a-kube-api-access-p76kz\") pod \"nova-operator-controller-manager-c7c776c96-q5q9z\" (UID: \"ef369140-3b22-4cd7-926f-525ebde98f1a\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.388174 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.393182 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dsz5s" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.401756 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.422485 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p76kz\" (UniqueName: \"kubernetes.io/projected/ef369140-3b22-4cd7-926f-525ebde98f1a-kube-api-access-p76kz\") pod \"nova-operator-controller-manager-c7c776c96-q5q9z\" (UID: \"ef369140-3b22-4cd7-926f-525ebde98f1a\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.423717 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7ljl\" (UniqueName: \"kubernetes.io/projected/16f6e3cb-02d6-4820-953b-a728440b812f-kube-api-access-n7ljl\") pod \"neutron-operator-controller-manager-64d7b59854-h8tzm\" (UID: \"16f6e3cb-02d6-4820-953b-a728440b812f\") " pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.446695 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.490065 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.490565 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.491041 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sssw\" (UniqueName: \"kubernetes.io/projected/f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d-kube-api-access-2sssw\") pod \"octavia-operator-controller-manager-76fcc6dc7c-69zkg\" (UID: \"f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.492303 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.506007 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zl69x" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.506243 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.506338 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.515332 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.518158 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.521638 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-s6p5r" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.560422 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.572860 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.583156 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.592107 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-m24qn\" (UID: \"145c6edc-16b0-47ec-846b-638e354aa1dd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.595887 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8mw\" (UniqueName: \"kubernetes.io/projected/145c6edc-16b0-47ec-846b-638e354aa1dd-kube-api-access-qw8mw\") pod \"openstack-baremetal-operator-controller-manager-6d776955-m24qn\" (UID: \"145c6edc-16b0-47ec-846b-638e354aa1dd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.596041 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sssw\" (UniqueName: \"kubernetes.io/projected/f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d-kube-api-access-2sssw\") pod \"octavia-operator-controller-manager-76fcc6dc7c-69zkg\" (UID: \"f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.596093 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6c9f\" (UniqueName: \"kubernetes.io/projected/9e74b6a2-2c57-4514-9709-9db9b0a76d55-kube-api-access-w6c9f\") pod \"ovn-operator-controller-manager-5ddfc99c65-m7slc\" (UID: \"9e74b6a2-2c57-4514-9709-9db9b0a76d55\") " pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.614676 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.631659 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.633565 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2sssw\" (UniqueName: \"kubernetes.io/projected/f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d-kube-api-access-2sssw\") pod \"octavia-operator-controller-manager-76fcc6dc7c-69zkg\" (UID: \"f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.644697 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.644831 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.649125 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b8qp4" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.650679 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.650793 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.652602 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.655972 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-f86wc" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.661280 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.674032 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.676780 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.678330 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.682665 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.684009 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.688520 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nd9ds" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.688847 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-472mq" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.699732 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-m24qn\" (UID: \"145c6edc-16b0-47ec-846b-638e354aa1dd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.699796 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8mw\" (UniqueName: \"kubernetes.io/projected/145c6edc-16b0-47ec-846b-638e354aa1dd-kube-api-access-qw8mw\") pod \"openstack-baremetal-operator-controller-manager-6d776955-m24qn\" (UID: \"145c6edc-16b0-47ec-846b-638e354aa1dd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.699851 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6c9f\" (UniqueName: \"kubernetes.io/projected/9e74b6a2-2c57-4514-9709-9db9b0a76d55-kube-api-access-w6c9f\") pod \"ovn-operator-controller-manager-5ddfc99c65-m7slc\" (UID: \"9e74b6a2-2c57-4514-9709-9db9b0a76d55\") " pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.699884 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8eba803d-c206-4418-a0b6-8594af2670aa-cert\") pod \"infra-operator-controller-manager-7d857cc749-zslc4\" (UID: \"8eba803d-c206-4418-a0b6-8594af2670aa\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" Sep 30 17:29:57 crc kubenswrapper[4688]: E0930 17:29:57.700027 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Sep 30 17:29:57 crc kubenswrapper[4688]: E0930 17:29:57.700080 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eba803d-c206-4418-a0b6-8594af2670aa-cert podName:8eba803d-c206-4418-a0b6-8594af2670aa nodeName:}" failed. No retries permitted until 2025-09-30 17:29:58.700065017 +0000 UTC m=+855.995502575 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8eba803d-c206-4418-a0b6-8594af2670aa-cert") pod "infra-operator-controller-manager-7d857cc749-zslc4" (UID: "8eba803d-c206-4418-a0b6-8594af2670aa") : secret "infra-operator-webhook-server-cert" not found Sep 30 17:29:57 crc kubenswrapper[4688]: E0930 17:29:57.700452 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 17:29:57 crc kubenswrapper[4688]: E0930 17:29:57.700506 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert podName:145c6edc-16b0-47ec-846b-638e354aa1dd nodeName:}" failed. No retries permitted until 2025-09-30 17:29:58.200497218 +0000 UTC m=+855.495934776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert") pod "openstack-baremetal-operator-controller-manager-6d776955-m24qn" (UID: "145c6edc-16b0-47ec-846b-638e354aa1dd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.717586 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.718892 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.720783 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mv59h" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.724735 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.737420 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.738398 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.745839 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6c9f\" (UniqueName: \"kubernetes.io/projected/9e74b6a2-2c57-4514-9709-9db9b0a76d55-kube-api-access-w6c9f\") pod \"ovn-operator-controller-manager-5ddfc99c65-m7slc\" (UID: \"9e74b6a2-2c57-4514-9709-9db9b0a76d55\") " pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.747800 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8mw\" (UniqueName: \"kubernetes.io/projected/145c6edc-16b0-47ec-846b-638e354aa1dd-kube-api-access-qw8mw\") pod \"openstack-baremetal-operator-controller-manager-6d776955-m24qn\" (UID: \"145c6edc-16b0-47ec-846b-638e354aa1dd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.766825 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.797760 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.800073 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.806489 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4n8j4" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.806839 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.808350 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rgvr\" (UniqueName: \"kubernetes.io/projected/26fa1afc-ed5f-4541-aab0-e003a11f653c-kube-api-access-9rgvr\") pod \"test-operator-controller-manager-f66b554c6-cdx5c\" (UID: \"26fa1afc-ed5f-4541-aab0-e003a11f653c\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.808476 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmv8\" (UniqueName: \"kubernetes.io/projected/0768b406-8010-4804-8769-634ee21211ed-kube-api-access-gkmv8\") pod \"swift-operator-controller-manager-bc7dc7bd9-wkxsn\" (UID: \"0768b406-8010-4804-8769-634ee21211ed\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.808513 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbr4h\" (UniqueName: \"kubernetes.io/projected/70e2e81b-be62-472a-8b70-d5aadace742e-kube-api-access-tbr4h\") pod \"placement-operator-controller-manager-589c58c6c-rd5sx\" (UID: 
\"70e2e81b-be62-472a-8b70-d5aadace742e\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.808554 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg7rq\" (UniqueName: \"kubernetes.io/projected/19e2aa18-5fe3-4451-8ca1-303b87a5e131-kube-api-access-qg7rq\") pod \"watcher-operator-controller-manager-76669f99c-f2nzh\" (UID: \"19e2aa18-5fe3-4451-8ca1-303b87a5e131\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.808606 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d65hq\" (UniqueName: \"kubernetes.io/projected/e2803e31-92d8-4071-8a82-84249fc01718-kube-api-access-d65hq\") pod \"telemetry-operator-controller-manager-b8d54b5d7-g6d6b\" (UID: \"e2803e31-92d8-4071-8a82-84249fc01718\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.817444 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.858765 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.865991 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.869662 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-j5ldk" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.883919 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr"] Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.931235 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkmv8\" (UniqueName: \"kubernetes.io/projected/0768b406-8010-4804-8769-634ee21211ed-kube-api-access-gkmv8\") pod \"swift-operator-controller-manager-bc7dc7bd9-wkxsn\" (UID: \"0768b406-8010-4804-8769-634ee21211ed\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.931550 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbr4h\" (UniqueName: \"kubernetes.io/projected/70e2e81b-be62-472a-8b70-d5aadace742e-kube-api-access-tbr4h\") pod \"placement-operator-controller-manager-589c58c6c-rd5sx\" (UID: \"70e2e81b-be62-472a-8b70-d5aadace742e\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.931715 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg7rq\" (UniqueName: \"kubernetes.io/projected/19e2aa18-5fe3-4451-8ca1-303b87a5e131-kube-api-access-qg7rq\") pod \"watcher-operator-controller-manager-76669f99c-f2nzh\" (UID: \"19e2aa18-5fe3-4451-8ca1-303b87a5e131\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.931838 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d65hq\" (UniqueName: \"kubernetes.io/projected/e2803e31-92d8-4071-8a82-84249fc01718-kube-api-access-d65hq\") pod \"telemetry-operator-controller-manager-b8d54b5d7-g6d6b\" (UID: \"e2803e31-92d8-4071-8a82-84249fc01718\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.931988 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rgvr\" (UniqueName: \"kubernetes.io/projected/26fa1afc-ed5f-4541-aab0-e003a11f653c-kube-api-access-9rgvr\") pod \"test-operator-controller-manager-f66b554c6-cdx5c\" (UID: \"26fa1afc-ed5f-4541-aab0-e003a11f653c\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.932078 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc9e0382-183b-41ab-8890-9ad95c9ebba4-cert\") pod \"openstack-operator-controller-manager-89bc4fb57-2x7jt\" (UID: \"fc9e0382-183b-41ab-8890-9ad95c9ebba4\") " pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.932333 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vflj7\" (UniqueName: \"kubernetes.io/projected/fc9e0382-183b-41ab-8890-9ad95c9ebba4-kube-api-access-vflj7\") pod \"openstack-operator-controller-manager-89bc4fb57-2x7jt\" (UID: \"fc9e0382-183b-41ab-8890-9ad95c9ebba4\") " pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.969491 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d65hq\" (UniqueName: \"kubernetes.io/projected/e2803e31-92d8-4071-8a82-84249fc01718-kube-api-access-d65hq\") pod \"telemetry-operator-controller-manager-b8d54b5d7-g6d6b\" (UID: \"e2803e31-92d8-4071-8a82-84249fc01718\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.969533 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg7rq\" (UniqueName: \"kubernetes.io/projected/19e2aa18-5fe3-4451-8ca1-303b87a5e131-kube-api-access-qg7rq\") pod \"watcher-operator-controller-manager-76669f99c-f2nzh\" (UID: \"19e2aa18-5fe3-4451-8ca1-303b87a5e131\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.969558 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rgvr\" (UniqueName: \"kubernetes.io/projected/26fa1afc-ed5f-4541-aab0-e003a11f653c-kube-api-access-9rgvr\") pod \"test-operator-controller-manager-f66b554c6-cdx5c\" (UID: \"26fa1afc-ed5f-4541-aab0-e003a11f653c\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c" Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.971200 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkmv8\" (UniqueName: \"kubernetes.io/projected/0768b406-8010-4804-8769-634ee21211ed-kube-api-access-gkmv8\") pod \"swift-operator-controller-manager-bc7dc7bd9-wkxsn\" (UID: \"0768b406-8010-4804-8769-634ee21211ed\") " 
pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn"
Sep 30 17:29:57 crc kubenswrapper[4688]: I0930 17:29:57.977972 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbr4h\" (UniqueName: \"kubernetes.io/projected/70e2e81b-be62-472a-8b70-d5aadace742e-kube-api-access-tbr4h\") pod \"placement-operator-controller-manager-589c58c6c-rd5sx\" (UID: \"70e2e81b-be62-472a-8b70-d5aadace742e\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:57.999994 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.040593 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.041492 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vflj7\" (UniqueName: \"kubernetes.io/projected/fc9e0382-183b-41ab-8890-9ad95c9ebba4-kube-api-access-vflj7\") pod \"openstack-operator-controller-manager-89bc4fb57-2x7jt\" (UID: \"fc9e0382-183b-41ab-8890-9ad95c9ebba4\") " pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.041602 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpbw4\" (UniqueName: \"kubernetes.io/projected/a01178d9-6b72-442e-b455-e5625b5e154e-kube-api-access-zpbw4\") pod \"rabbitmq-cluster-operator-manager-79d8469568-hxqmr\" (UID: \"a01178d9-6b72-442e-b455-e5625b5e154e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.041688 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc9e0382-183b-41ab-8890-9ad95c9ebba4-cert\") pod \"openstack-operator-controller-manager-89bc4fb57-2x7jt\" (UID: \"fc9e0382-183b-41ab-8890-9ad95c9ebba4\") " pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"
Sep 30 17:29:58 crc kubenswrapper[4688]: E0930 17:29:58.041829 4688 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Sep 30 17:29:58 crc kubenswrapper[4688]: E0930 17:29:58.041879 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc9e0382-183b-41ab-8890-9ad95c9ebba4-cert podName:fc9e0382-183b-41ab-8890-9ad95c9ebba4 nodeName:}" failed. No retries permitted until 2025-09-30 17:29:58.541860936 +0000 UTC m=+855.837298494 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fc9e0382-183b-41ab-8890-9ad95c9ebba4-cert") pod "openstack-operator-controller-manager-89bc4fb57-2x7jt" (UID: "fc9e0382-183b-41ab-8890-9ad95c9ebba4") : secret "webhook-server-cert" not found
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.071108 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vflj7\" (UniqueName: \"kubernetes.io/projected/fc9e0382-183b-41ab-8890-9ad95c9ebba4-kube-api-access-vflj7\") pod \"openstack-operator-controller-manager-89bc4fb57-2x7jt\" (UID: \"fc9e0382-183b-41ab-8890-9ad95c9ebba4\") " pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.098502 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.137952 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.143173 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpbw4\" (UniqueName: \"kubernetes.io/projected/a01178d9-6b72-442e-b455-e5625b5e154e-kube-api-access-zpbw4\") pod \"rabbitmq-cluster-operator-manager-79d8469568-hxqmr\" (UID: \"a01178d9-6b72-442e-b455-e5625b5e154e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.179161 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq"]
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.186867 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpbw4\" (UniqueName: \"kubernetes.io/projected/a01178d9-6b72-442e-b455-e5625b5e154e-kube-api-access-zpbw4\") pod \"rabbitmq-cluster-operator-manager-79d8469568-hxqmr\" (UID: \"a01178d9-6b72-442e-b455-e5625b5e154e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.196935 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.218368 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.241410 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88"]
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.250883 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-m24qn\" (UID: \"145c6edc-16b0-47ec-846b-638e354aa1dd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn"
Sep 30 17:29:58 crc kubenswrapper[4688]: E0930 17:29:58.251115 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Sep 30 17:29:58 crc kubenswrapper[4688]: E0930 17:29:58.251198 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert podName:145c6edc-16b0-47ec-846b-638e354aa1dd nodeName:}" failed. No retries permitted until 2025-09-30 17:29:59.251171132 +0000 UTC m=+856.546608690 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert") pod "openstack-baremetal-operator-controller-manager-6d776955-m24qn" (UID: "145c6edc-16b0-47ec-846b-638e354aa1dd") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.268227 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8"]
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.307000 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.349193 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq" event={"ID":"1bea0309-4386-4a05-a980-1104ec2e37c2","Type":"ContainerStarted","Data":"7a806b51e2ceaae87649c50052963f15e6b3e915f6d46542656ebdc616ba3c82"}
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.560317 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc9e0382-183b-41ab-8890-9ad95c9ebba4-cert\") pod \"openstack-operator-controller-manager-89bc4fb57-2x7jt\" (UID: \"fc9e0382-183b-41ab-8890-9ad95c9ebba4\") " pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.569557 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc9e0382-183b-41ab-8890-9ad95c9ebba4-cert\") pod \"openstack-operator-controller-manager-89bc4fb57-2x7jt\" (UID: \"fc9e0382-183b-41ab-8890-9ad95c9ebba4\") " pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.589962 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp"]
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.590880 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.642497 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh"]
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.649806 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk"]
Sep 30 17:29:58 crc kubenswrapper[4688]: W0930 17:29:58.675335 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c01591d_ed02_4a38_b0ec_560d9b41cc8c.slice/crio-3fd87b5bf2fc529aa9db685fe93010991eafdf95d7538623e211beb63ef39f4d WatchSource:0}: Error finding container 3fd87b5bf2fc529aa9db685fe93010991eafdf95d7538623e211beb63ef39f4d: Status 404 returned error can't find the container with id 3fd87b5bf2fc529aa9db685fe93010991eafdf95d7538623e211beb63ef39f4d
Sep 30 17:29:58 crc kubenswrapper[4688]: W0930 17:29:58.684787 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d7f313_71b6_4d2e_b67d_c9e1ddaca02d.slice/crio-fb098953c1f9d01eb3024791fc9ae370c3be402879fc2dedc6d819bf4be28594 WatchSource:0}: Error finding container fb098953c1f9d01eb3024791fc9ae370c3be402879fc2dedc6d819bf4be28594: Status 404 returned error can't find the container with id fb098953c1f9d01eb3024791fc9ae370c3be402879fc2dedc6d819bf4be28594
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.763472 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8eba803d-c206-4418-a0b6-8594af2670aa-cert\") pod \"infra-operator-controller-manager-7d857cc749-zslc4\" (UID: \"8eba803d-c206-4418-a0b6-8594af2670aa\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.779812 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8eba803d-c206-4418-a0b6-8594af2670aa-cert\") pod \"infra-operator-controller-manager-7d857cc749-zslc4\" (UID: \"8eba803d-c206-4418-a0b6-8594af2670aa\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.780475 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm"]
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.796349 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4"
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.800947 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z"]
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.807624 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg"]
Sep 30 17:29:58 crc kubenswrapper[4688]: I0930 17:29:58.813469 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg"]
Sep 30 17:29:58 crc kubenswrapper[4688]: W0930 17:29:58.819152 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef369140_3b22_4cd7_926f_525ebde98f1a.slice/crio-4ee7902967937effd74b9ae0d646f67f1c8da9d63842a5944a1ac5365e3162bc WatchSource:0}: Error finding container 4ee7902967937effd74b9ae0d646f67f1c8da9d63842a5944a1ac5365e3162bc: Status 404 returned error can't find the container with id 4ee7902967937effd74b9ae0d646f67f1c8da9d63842a5944a1ac5365e3162bc
Sep 30 17:29:58 crc kubenswrapper[4688]: W0930 17:29:58.841961 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod505c7e1c_5b6c_4736_8a1f_93ed5a92b47e.slice/crio-62ab3fe5adf811afee1afc0233f0337e4443eb1ed755cb723c266cbf91047fb9 WatchSource:0}: Error finding container 62ab3fe5adf811afee1afc0233f0337e4443eb1ed755cb723c266cbf91047fb9: Status 404 returned error can't find the container with id 62ab3fe5adf811afee1afc0233f0337e4443eb1ed755cb723c266cbf91047fb9
Sep 30 17:29:58 crc kubenswrapper[4688]: W0930 17:29:58.852974 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16f6e3cb_02d6_4820_953b_a728440b812f.slice/crio-5b7323b16a5e08a585c6be28e34193276e1a316c9972e13209ddf74cb5673321 WatchSource:0}: Error finding container 5b7323b16a5e08a585c6be28e34193276e1a316c9972e13209ddf74cb5673321: Status 404 returned error can't find the container with id 5b7323b16a5e08a585c6be28e34193276e1a316c9972e13209ddf74cb5673321
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.199895 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.263226 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.275090 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-m24qn\" (UID: \"145c6edc-16b0-47ec-846b-638e354aa1dd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn"
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.275275 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.275349 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert podName:145c6edc-16b0-47ec-846b-638e354aa1dd nodeName:}" failed. No retries permitted until 2025-09-30 17:30:01.275328655 +0000 UTC m=+858.570766213 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert") pod "openstack-baremetal-operator-controller-manager-6d776955-m24qn" (UID: "145c6edc-16b0-47ec-846b-638e354aa1dd") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.291636 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.298820 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.306514 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-886sw"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.311019 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.316090 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.318731 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.322677 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.325874 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn"]
Sep 30 17:29:59 crc kubenswrapper[4688]: W0930 17:29:59.336489 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod398df167_3c2c_416f_b77c_baafa00ce747.slice/crio-978f1efedbc1cd8652ac12da4fd76816b02cb2f2ff7a10d48d0fd169bcc844a2 WatchSource:0}: Error finding container 978f1efedbc1cd8652ac12da4fd76816b02cb2f2ff7a10d48d0fd169bcc844a2: Status 404 returned error can't find the container with id 978f1efedbc1cd8652ac12da4fd76816b02cb2f2ff7a10d48d0fd169bcc844a2
Sep 30 17:29:59 crc kubenswrapper[4688]: W0930 17:29:59.341610 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9796897_79b7_4273_8f85_f3c064a465af.slice/crio-4c75e828c289ccf00a2076843d4dbbe3b0de2106218c779680c3b5858fcf510d WatchSource:0}: Error finding container 4c75e828c289ccf00a2076843d4dbbe3b0de2106218c779680c3b5858fcf510d: Status 404 returned error can't find the container with id 4c75e828c289ccf00a2076843d4dbbe3b0de2106218c779680c3b5858fcf510d
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.353475 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:4cdb30423c14ab48888aeeb699259bd9051284ec9f874ed9bab94c7965f45884,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5lsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6d68dbc695-7tgsx_openstack-operators(398df167-3c2c-416f-b77c-baafa00ce747): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Sep 30 17:29:59 crc kubenswrapper[4688]: W0930 17:29:59.362904 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0768b406_8010_4804_8769_634ee21211ed.slice/crio-b0cf868ab9c090759cedd66f90aa0214362bad65ae32f81e4749d0d51860aea2 WatchSource:0}: Error finding container b0cf868ab9c090759cedd66f90aa0214362bad65ae32f81e4749d0d51860aea2: Status 404 returned error can't find the container with id b0cf868ab9c090759cedd66f90aa0214362bad65ae32f81e4749d0d51860aea2
Sep 30 17:29:59 crc kubenswrapper[4688]: W0930 17:29:59.367556 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e74b6a2_2c57_4514_9709_9db9b0a76d55.slice/crio-d192219669d7a7e8e61bfb12d203646e567779832c194df1b81b5c1b7d08fe3b WatchSource:0}: Error finding container d192219669d7a7e8e61bfb12d203646e567779832c194df1b81b5c1b7d08fe3b: Status 404 returned error can't find the container with id d192219669d7a7e8e61bfb12d203646e567779832c194df1b81b5c1b7d08fe3b
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.367723 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" event={"ID":"16f6e3cb-02d6-4820-953b-a728440b812f","Type":"ContainerStarted","Data":"5b7323b16a5e08a585c6be28e34193276e1a316c9972e13209ddf74cb5673321"}
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.367856 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:15d7b5a365350a831ca59d984df67fadeccf89d599e487a7597b105afb82ce4a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7cr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-88c7-886sw_openstack-operators(f9796897-79b7-4273-8f85-f3c064a465af): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.372668 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" event={"ID":"398df167-3c2c-416f-b77c-baafa00ce747","Type":"ContainerStarted","Data":"978f1efedbc1cd8652ac12da4fd76816b02cb2f2ff7a10d48d0fd169bcc844a2"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.376866 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" event={"ID":"f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d","Type":"ContainerStarted","Data":"dd0f696bffef32def3c5708c820147f5313ad6a0e74771e48d7274931872a35c"}
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.378409 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3c6f7d737e0196ec302f44354228d783ad3b210a75703dda3b39c15c01a67e8c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkmv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bc7dc7bd9-wkxsn_openstack-operators(0768b406-8010-4804-8769-634ee21211ed): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.378560 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.212:5001/openstack-k8s-operators/ovn-operator:0ca9008685e7fe560ac65d84b9640c421a2338c2,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6c9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5ddfc99c65-m7slc_openstack-operators(9e74b6a2-2c57-4514-9709-9db9b0a76d55): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.386229 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.388326 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8" event={"ID":"b4b7968e-2dbf-463c-addf-f79bb82c5fb3","Type":"ContainerStarted","Data":"aaab102a4bd9dfe5f086fb24a70b97748e596a65d63332d73210dc7601bb0dd9"}
Sep 30 17:29:59 crc kubenswrapper[4688]: W0930 17:29:59.391311 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70e2e81b_be62_472a_8b70_d5aadace742e.slice/crio-2fbe99cea349faea787524ce6f20e7605d15c41e237047618e9b3920709109e0 WatchSource:0}: Error finding container 2fbe99cea349faea787524ce6f20e7605d15c41e237047618e9b3920709109e0: Status 404 returned error can't find the container with id 2fbe99cea349faea787524ce6f20e7605d15c41e237047618e9b3920709109e0
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.397656 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c" event={"ID":"26fa1afc-ed5f-4541-aab0-e003a11f653c","Type":"ContainerStarted","Data":"f91a141987656ae7b739f82a801b32d28d773a575318708c97c0e1de35d40b7b"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.403500 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr" event={"ID":"a01178d9-6b72-442e-b455-e5625b5e154e","Type":"ContainerStarted","Data":"029efdb2bd6749ed4e278e67e8accc71c33832298e5850d0e1f9f3fdb332e693"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.406849 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" event={"ID":"7c01591d-ed02-4a38-b0ec-560d9b41cc8c","Type":"ContainerStarted","Data":"3fd87b5bf2fc529aa9db685fe93010991eafdf95d7538623e211beb63ef39f4d"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.417688 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" event={"ID":"ef1716f0-3b35-433b-8ad9-5127c151a43f","Type":"ContainerStarted","Data":"a16c281a9d32215a10ca9c3c41d18704b21f7bd6749cc2e906e72ac3aecd9103"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.423099 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4"]
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.426149 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" event={"ID":"08d7f313-71b6-4d2e-b67d-c9e1ddaca02d","Type":"ContainerStarted","Data":"fb098953c1f9d01eb3024791fc9ae370c3be402879fc2dedc6d819bf4be28594"}
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.426193 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbr4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-589c58c6c-rd5sx_openstack-operators(70e2e81b-be62-472a-8b70-d5aadace742e): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Sep 30 17:29:59 crc kubenswrapper[4688]: W0930 17:29:59.444933 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc9e0382_183b_41ab_8890_9ad95c9ebba4.slice/crio-86ed3aa330952e50d4bff6f49050fcbe55308937eb87cd820fa88923f55800f1 WatchSource:0}: Error finding container 86ed3aa330952e50d4bff6f49050fcbe55308937eb87cd820fa88923f55800f1: Status 404 returned error can't find the container with id 86ed3aa330952e50d4bff6f49050fcbe55308937eb87cd820fa88923f55800f1
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.458301 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh" event={"ID":"19e2aa18-5fe3-4451-8ca1-303b87a5e131","Type":"ContainerStarted","Data":"ce4d5ffd0366f7c8dec399c41bca255252d3cb6534e9ee2fbded9675b1b35013"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.458385 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" event={"ID":"940175df-0d37-4540-b0c7-a81bc3261ba6","Type":"ContainerStarted","Data":"945b309bc358878b61b5f9e17bf0b49c821b153d4ec14ff171bc65958b7502d2"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.458405 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b" event={"ID":"e2803e31-92d8-4071-8a82-84249fc01718","Type":"ContainerStarted","Data":"724e23b051321b0254c5cdfa42ffa16503b901ab49e71d9b648a37a90a439df8"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.458422 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" event={"ID":"505c7e1c-5b6c-4736-8a1f-93ed5a92b47e","Type":"ContainerStarted","Data":"62ab3fe5adf811afee1afc0233f0337e4443eb1ed755cb723c266cbf91047fb9"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.462713 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z" event={"ID":"ef369140-3b22-4cd7-926f-525ebde98f1a","Type":"ContainerStarted","Data":"4ee7902967937effd74b9ae0d646f67f1c8da9d63842a5944a1ac5365e3162bc"}
Sep 30 17:29:59 crc kubenswrapper[4688]: I0930 17:29:59.466921 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp" event={"ID":"8a89103b-24b0-456d-88c5-f6abd68e75fb","Type":"ContainerStarted","Data":"c69903dba37ff3acf4999f3b80dc0d29e39ad8b8eef9cdaac972af2c539ff5e9"}
Sep 30 17:29:59 crc kubenswrapper[4688]: W0930 17:29:59.474514 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eba803d_c206_4418_a0b6_8594af2670aa.slice/crio-fb99c98770c159735820b3280d949344bc3ebc4d8c899862e34526a33e589094 WatchSource:0}: Error finding container fb99c98770c159735820b3280d949344bc3ebc4d8c899862e34526a33e589094: Status 404 returned error can't find the container with id fb99c98770c159735820b3280d949344bc3ebc4d8c899862e34526a33e589094
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.671587 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" podUID="f9796897-79b7-4273-8f85-f3c064a465af"
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.688228 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" podUID="398df167-3c2c-416f-b77c-baafa00ce747"
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.771866 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" podUID="0768b406-8010-4804-8769-634ee21211ed"
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.789683 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" podUID="9e74b6a2-2c57-4514-9709-9db9b0a76d55"
Sep 30 17:29:59 crc kubenswrapper[4688]: E0930 17:29:59.841723 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" podUID="70e2e81b-be62-472a-8b70-d5aadace742e"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.159509 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"]
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.165707 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.170432 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.170799 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.177945 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"]
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.307947 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjv4w\" (UniqueName: \"kubernetes.io/projected/cf29abb2-202c-41f6-89eb-e2a680ea4118-kube-api-access-zjv4w\") pod \"collect-profiles-29320890-dk82k\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.308081 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf29abb2-202c-41f6-89eb-e2a680ea4118-config-volume\") pod \"collect-profiles-29320890-dk82k\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.308114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf29abb2-202c-41f6-89eb-e2a680ea4118-secret-volume\") pod \"collect-profiles-29320890-dk82k\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.409675 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjv4w\" (UniqueName: \"kubernetes.io/projected/cf29abb2-202c-41f6-89eb-e2a680ea4118-kube-api-access-zjv4w\") pod \"collect-profiles-29320890-dk82k\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.409804 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf29abb2-202c-41f6-89eb-e2a680ea4118-config-volume\") pod \"collect-profiles-29320890-dk82k\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.409827 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf29abb2-202c-41f6-89eb-e2a680ea4118-secret-volume\") pod \"collect-profiles-29320890-dk82k\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.411819 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf29abb2-202c-41f6-89eb-e2a680ea4118-config-volume\") pod \"collect-profiles-29320890-dk82k\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.425585 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf29abb2-202c-41f6-89eb-e2a680ea4118-secret-volume\") pod \"collect-profiles-29320890-dk82k\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.434705 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjv4w\" (UniqueName: \"kubernetes.io/projected/cf29abb2-202c-41f6-89eb-e2a680ea4118-kube-api-access-zjv4w\") pod \"collect-profiles-29320890-dk82k\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.514274 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" event={"ID":"0768b406-8010-4804-8769-634ee21211ed","Type":"ContainerStarted","Data":"cf55e96c1597206e12b8a3334f62fcfe101aa36f10e4999e1bd88f1000faff6d"}
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.514342 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" event={"ID":"0768b406-8010-4804-8769-634ee21211ed","Type":"ContainerStarted","Data":"b0cf868ab9c090759cedd66f90aa0214362bad65ae32f81e4749d0d51860aea2"}
Sep 30 17:30:00 crc kubenswrapper[4688]: E0930 17:30:00.520036 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3c6f7d737e0196ec302f44354228d783ad3b210a75703dda3b39c15c01a67e8c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" podUID="0768b406-8010-4804-8769-634ee21211ed"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.522061 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" event={"ID":"f9796897-79b7-4273-8f85-f3c064a465af","Type":"ContainerStarted","Data":"9971a4cd86eea6f79a8cb0bac1cfce20da1d427063dedc0b0a8640e4f87dd73a"}
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.522225 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" event={"ID":"f9796897-79b7-4273-8f85-f3c064a465af","Type":"ContainerStarted","Data":"4c75e828c289ccf00a2076843d4dbbe3b0de2106218c779680c3b5858fcf510d"}
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.524783 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" event={"ID":"8eba803d-c206-4418-a0b6-8594af2670aa","Type":"ContainerStarted","Data":"fb99c98770c159735820b3280d949344bc3ebc4d8c899862e34526a33e589094"}
Sep 30 17:30:00 crc kubenswrapper[4688]: E0930 17:30:00.528187 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:15d7b5a365350a831ca59d984df67fadeccf89d599e487a7597b105afb82ce4a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" podUID="f9796897-79b7-4273-8f85-f3c064a465af"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.545368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt" event={"ID":"fc9e0382-183b-41ab-8890-9ad95c9ebba4","Type":"ContainerStarted","Data":"b180ef25e02d62ab0548409d4062ef1f10f39e86988053bdbe545406fb09cf85"}
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.545430 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt" event={"ID":"fc9e0382-183b-41ab-8890-9ad95c9ebba4","Type":"ContainerStarted","Data":"89340234f21b6c0ae1ae2019d30bff5a010759f8bb0a54dac2f141f68f64e0f6"}
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.545443 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt" event={"ID":"fc9e0382-183b-41ab-8890-9ad95c9ebba4","Type":"ContainerStarted","Data":"86ed3aa330952e50d4bff6f49050fcbe55308937eb87cd820fa88923f55800f1"}
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.545482 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.550024 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.567169 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" event={"ID":"398df167-3c2c-416f-b77c-baafa00ce747","Type":"ContainerStarted","Data":"c813f52d6c9ec18cea3cc8eab772750188857a488654c5889ed065009d7430f3"}
Sep 30 17:30:00 crc kubenswrapper[4688]: E0930 17:30:00.572178 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:4cdb30423c14ab48888aeeb699259bd9051284ec9f874ed9bab94c7965f45884\\\"\"" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" podUID="398df167-3c2c-416f-b77c-baafa00ce747"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.575977 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" event={"ID":"70e2e81b-be62-472a-8b70-d5aadace742e","Type":"ContainerStarted","Data":"fafb2483f738a23f22733611ccab0cedf6c12ad92f1604452037b290f109ca55"}
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.576038 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" event={"ID":"70e2e81b-be62-472a-8b70-d5aadace742e","Type":"ContainerStarted","Data":"2fbe99cea349faea787524ce6f20e7605d15c41e237047618e9b3920709109e0"}
Sep 30 17:30:00 crc kubenswrapper[4688]: E0930 17:30:00.579342 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" podUID="70e2e81b-be62-472a-8b70-d5aadace742e"
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.581456 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" event={"ID":"9e74b6a2-2c57-4514-9709-9db9b0a76d55","Type":"ContainerStarted","Data":"91c2f76fa8c22edfa74f684e30c628ba0cd05db643c21ab658111ee3bd93b7ee"}
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.581519 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" event={"ID":"9e74b6a2-2c57-4514-9709-9db9b0a76d55","Type":"ContainerStarted","Data":"d192219669d7a7e8e61bfb12d203646e567779832c194df1b81b5c1b7d08fe3b"}
Sep 30 17:30:00 crc kubenswrapper[4688]: I0930 17:30:00.591391 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt" podStartSLOduration=3.591362709 podStartE2EDuration="3.591362709s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:30:00.587767966 +0000 UTC m=+857.883205534" watchObservedRunningTime="2025-09-30 17:30:00.591362709 +0000 UTC m=+857.886800267"
Sep 30 17:30:00 crc kubenswrapper[4688]: E0930 17:30:00.614635 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.212:5001/openstack-k8s-operators/ovn-operator:0ca9008685e7fe560ac65d84b9640c421a2338c2\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" podUID="9e74b6a2-2c57-4514-9709-9db9b0a76d55"
Sep 30 17:30:01 crc kubenswrapper[4688]: I0930 17:30:01.275674 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"]
Sep 30 17:30:01 crc kubenswrapper[4688]: W0930 17:30:01.304508 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf29abb2_202c_41f6_89eb_e2a680ea4118.slice/crio-a23dc73784c3f04cd13053b3555204a11015f0c671ec3129fa987e481855495b WatchSource:0}: Error finding container a23dc73784c3f04cd13053b3555204a11015f0c671ec3129fa987e481855495b: Status 404 returned error can't find the container with id a23dc73784c3f04cd13053b3555204a11015f0c671ec3129fa987e481855495b
Sep 30 17:30:01 crc kubenswrapper[4688]: I0930 17:30:01.336787 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-m24qn\" (UID: \"145c6edc-16b0-47ec-846b-638e354aa1dd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn"
Sep 30 17:30:01 crc kubenswrapper[4688]: I0930 17:30:01.344275 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/145c6edc-16b0-47ec-846b-638e354aa1dd-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-m24qn\" (UID: \"145c6edc-16b0-47ec-846b-638e354aa1dd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn"
Sep 30 17:30:01 crc kubenswrapper[4688]: I0930 17:30:01.500691 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn"
Sep 30 17:30:01 crc kubenswrapper[4688]: I0930 17:30:01.601003 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k" event={"ID":"cf29abb2-202c-41f6-89eb-e2a680ea4118","Type":"ContainerStarted","Data":"a23dc73784c3f04cd13053b3555204a11015f0c671ec3129fa987e481855495b"}
Sep 30 17:30:01 crc kubenswrapper[4688]: E0930 17:30:01.605511 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.212:5001/openstack-k8s-operators/ovn-operator:0ca9008685e7fe560ac65d84b9640c421a2338c2\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" podUID="9e74b6a2-2c57-4514-9709-9db9b0a76d55"
Sep 30 17:30:01 crc kubenswrapper[4688]: E0930 17:30:01.605797 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3c6f7d737e0196ec302f44354228d783ad3b210a75703dda3b39c15c01a67e8c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" podUID="0768b406-8010-4804-8769-634ee21211ed"
Sep 30 17:30:01 crc kubenswrapper[4688]: E0930 17:30:01.606048 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:4cdb30423c14ab48888aeeb699259bd9051284ec9f874ed9bab94c7965f45884\\\"\"" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" podUID="398df167-3c2c-416f-b77c-baafa00ce747"
Sep 30 17:30:01 crc kubenswrapper[4688]: E0930 17:30:01.606800 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:15d7b5a365350a831ca59d984df67fadeccf89d599e487a7597b105afb82ce4a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" podUID="f9796897-79b7-4273-8f85-f3c064a465af"
Sep 30 17:30:01 crc kubenswrapper[4688]: E0930 17:30:01.608762 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" podUID="70e2e81b-be62-472a-8b70-d5aadace742e"
Sep 30 17:30:02 crc kubenswrapper[4688]: I0930 17:30:02.173196 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn"]
Sep 30 17:30:02 crc kubenswrapper[4688]: I0930 17:30:02.613397 4688 generic.go:334] "Generic (PLEG): container finished" podID="cf29abb2-202c-41f6-89eb-e2a680ea4118" containerID="a51cf1683c878550895a14176c941238c7fed2d4bf3e3eba55611554022e2ffb" exitCode=0
Sep 30 17:30:02 crc kubenswrapper[4688]: I0930 17:30:02.613465 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k" event={"ID":"cf29abb2-202c-41f6-89eb-e2a680ea4118","Type":"ContainerDied","Data":"a51cf1683c878550895a14176c941238c7fed2d4bf3e3eba55611554022e2ffb"}
Sep 30 17:30:03 crc kubenswrapper[4688]: I0930 17:30:03.623729 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" event={"ID":"145c6edc-16b0-47ec-846b-638e354aa1dd","Type":"ContainerStarted","Data":"f9fc2dd2ee7f1d06060d84956f0ba90ca24277b68ffb9b29bfd2ef8ee1b4c641"}
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.607286 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.625614 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf29abb2-202c-41f6-89eb-e2a680ea4118-config-volume\") pod \"cf29abb2-202c-41f6-89eb-e2a680ea4118\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") "
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.625951 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjv4w\" (UniqueName: \"kubernetes.io/projected/cf29abb2-202c-41f6-89eb-e2a680ea4118-kube-api-access-zjv4w\") pod \"cf29abb2-202c-41f6-89eb-e2a680ea4118\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") "
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.626038 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf29abb2-202c-41f6-89eb-e2a680ea4118-secret-volume\") pod \"cf29abb2-202c-41f6-89eb-e2a680ea4118\" (UID: \"cf29abb2-202c-41f6-89eb-e2a680ea4118\") "
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.626977 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf29abb2-202c-41f6-89eb-e2a680ea4118-config-volume" (OuterVolumeSpecName: "config-volume") pod "cf29abb2-202c-41f6-89eb-e2a680ea4118" (UID: "cf29abb2-202c-41f6-89eb-e2a680ea4118"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.627715 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf29abb2-202c-41f6-89eb-e2a680ea4118-config-volume\") on node \"crc\" DevicePath \"\""
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.634373 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf29abb2-202c-41f6-89eb-e2a680ea4118-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cf29abb2-202c-41f6-89eb-e2a680ea4118" (UID: "cf29abb2-202c-41f6-89eb-e2a680ea4118"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.636286 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k" event={"ID":"cf29abb2-202c-41f6-89eb-e2a680ea4118","Type":"ContainerDied","Data":"a23dc73784c3f04cd13053b3555204a11015f0c671ec3129fa987e481855495b"}
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.636335 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23dc73784c3f04cd13053b3555204a11015f0c671ec3129fa987e481855495b"
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.636365 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.636680 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf29abb2-202c-41f6-89eb-e2a680ea4118-kube-api-access-zjv4w" (OuterVolumeSpecName: "kube-api-access-zjv4w") pod "cf29abb2-202c-41f6-89eb-e2a680ea4118" (UID: "cf29abb2-202c-41f6-89eb-e2a680ea4118"). InnerVolumeSpecName "kube-api-access-zjv4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.728913 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjv4w\" (UniqueName: \"kubernetes.io/projected/cf29abb2-202c-41f6-89eb-e2a680ea4118-kube-api-access-zjv4w\") on node \"crc\" DevicePath \"\""
Sep 30 17:30:04 crc kubenswrapper[4688]: I0930 17:30:04.728966 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf29abb2-202c-41f6-89eb-e2a680ea4118-secret-volume\") on node \"crc\" DevicePath \"\""
Sep 30 17:30:08 crc kubenswrapper[4688]: I0930 17:30:08.598797 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-89bc4fb57-2x7jt"
Sep 30 17:30:13 crc kubenswrapper[4688]: E0930 17:30:13.507231 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f"
Sep 30 17:30:13 crc kubenswrapper[4688]: E0930 17:30:13.507894 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d65hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-b8d54b5d7-g6d6b_openstack-operators(e2803e31-92d8-4071-8a82-84249fc01718): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 30 17:30:13 crc kubenswrapper[4688]: E0930 17:30:13.906560 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:4d08afd31dc5ded10c54a5541f514ac351e9b40a183285b3db27d0757a6354c8"
Sep 30 17:30:13 crc kubenswrapper[4688]: E0930 17:30:13.906822 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:4d08afd31dc5ded10c54a5541f514ac351e9b40a183285b3db27d0757a6354c8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2sssw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-76fcc6dc7c-69zkg_openstack-operators(f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 30 17:30:14 crc kubenswrapper[4688]: E0930 17:30:14.339946 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:de99ad053f95f132f62b38335b2e8bf22fc28acbd441c3814764d63b63ef755f"
Sep 30 17:30:14 crc kubenswrapper[4688]: E0930 17:30:14.340206 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:de99ad053f95f132f62b38335b2e8bf22fc28acbd441c3814764d63b63ef755f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lh6kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-7d857cc749-zslc4_openstack-operators(8eba803d-c206-4418-a0b6-8594af2670aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:30:23 crc kubenswrapper[4688]: E0930 17:30:23.049590 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6" Sep 30 17:30:23 crc kubenswrapper[4688]: E0930 17:30:23.050910 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOM
ETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:
RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_UR
L_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_LIGHTSPEED_IMAGE_URL_DEFAULT,Value:quay.io/openstack-lightspeed/rag-content:os-docs-2024.2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAU
LT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw8mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-baremetal-operator-controller-manager-6d776955-m24qn_openstack-operators(145c6edc-16b0-47ec-846b-638e354aa1dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:30:23 crc kubenswrapper[4688]: E0930 17:30:23.398220 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b" Sep 30 17:30:23 crc kubenswrapper[4688]: E0930 17:30:23.398402 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpbw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-79d8469568-hxqmr_openstack-operators(a01178d9-6b72-442e-b455-e5625b5e154e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:30:23 crc kubenswrapper[4688]: E0930 17:30:23.399697 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr" podUID="a01178d9-6b72-442e-b455-e5625b5e154e" Sep 30 17:30:23 crc kubenswrapper[4688]: E0930 17:30:23.797491 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr" 
podUID="a01178d9-6b72-442e-b455-e5625b5e154e" Sep 30 17:30:25 crc kubenswrapper[4688]: E0930 17:30:25.305115 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b" podUID="e2803e31-92d8-4071-8a82-84249fc01718" Sep 30 17:30:25 crc kubenswrapper[4688]: I0930 17:30:25.813968 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b" event={"ID":"e2803e31-92d8-4071-8a82-84249fc01718","Type":"ContainerStarted","Data":"d65ccb3bd32b5ff7683269f4ba807b859a1a45ea01c42bcf0a4c8949015aea02"} Sep 30 17:30:26 crc kubenswrapper[4688]: E0930 17:30:26.017920 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" podUID="f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d" Sep 30 17:30:26 crc kubenswrapper[4688]: E0930 17:30:26.018731 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" podUID="8eba803d-c206-4418-a0b6-8594af2670aa" Sep 30 17:30:26 crc kubenswrapper[4688]: E0930 17:30:26.057689 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" podUID="145c6edc-16b0-47ec-846b-638e354aa1dd" Sep 30 17:30:26 crc kubenswrapper[4688]: I0930 17:30:26.825076 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" event={"ID":"8eba803d-c206-4418-a0b6-8594af2670aa","Type":"ContainerStarted","Data":"e0e8fd65c41504ea39ee83220e53dd360a89906e9173c47df47d05c4ac07fcff"} Sep 30 17:30:26 crc kubenswrapper[4688]: I0930 17:30:26.827772 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" event={"ID":"7c01591d-ed02-4a38-b0ec-560d9b41cc8c","Type":"ContainerStarted","Data":"1a1f46ee2c93bf671c48abc872e155acd64a810b32ece733397d65418df27cde"} Sep 30 17:30:26 crc kubenswrapper[4688]: I0930 17:30:26.831802 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp" event={"ID":"8a89103b-24b0-456d-88c5-f6abd68e75fb","Type":"ContainerStarted","Data":"0a8e1f7e6dd50d3b99f85a21dc8f933a723cf531321b7294562127c1b08bf1f1"} Sep 30 17:30:26 crc kubenswrapper[4688]: I0930 17:30:26.834241 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8" event={"ID":"b4b7968e-2dbf-463c-addf-f79bb82c5fb3","Type":"ContainerStarted","Data":"04c1393dcdff10f77d586e671faca46dbab9bc42de3f522238fc6fef229b3f56"} Sep 30 17:30:26 crc kubenswrapper[4688]: I0930 17:30:26.835903 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c" 
event={"ID":"26fa1afc-ed5f-4541-aab0-e003a11f653c","Type":"ContainerStarted","Data":"77eecc1c7d7a56c1eff159bd5a70b6aa5e1d3f39b845d5e33711b8823a4a5808"} Sep 30 17:30:26 crc kubenswrapper[4688]: I0930 17:30:26.838439 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" event={"ID":"ef1716f0-3b35-433b-8ad9-5127c151a43f","Type":"ContainerStarted","Data":"7c87466c00ab851adb679ada3b2d59b34cff20c193d3b2bad9d43664f8e2ca6a"} Sep 30 17:30:26 crc kubenswrapper[4688]: I0930 17:30:26.839898 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq" event={"ID":"1bea0309-4386-4a05-a980-1104ec2e37c2","Type":"ContainerStarted","Data":"f03db4d64591c21af7baaaa3df505c4d145c047655bf30d9a2f248c563f1759e"} Sep 30 17:30:26 crc kubenswrapper[4688]: I0930 17:30:26.841904 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" event={"ID":"f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d","Type":"ContainerStarted","Data":"9d144d804fb4e783194c9f5dddfbd7d40e740d8ef77f07949a5bcd33a20e026c"} Sep 30 17:30:26 crc kubenswrapper[4688]: I0930 17:30:26.847991 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" event={"ID":"145c6edc-16b0-47ec-846b-638e354aa1dd","Type":"ContainerStarted","Data":"a54fdf5233ddda02418a457fcd979748f47d8ede04eb2fbc12fae4c35e27031f"} Sep 30 17:30:26 crc kubenswrapper[4688]: E0930 17:30:26.850964 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" podUID="145c6edc-16b0-47ec-846b-638e354aa1dd" Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.881312 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" event={"ID":"f9796897-79b7-4273-8f85-f3c064a465af","Type":"ContainerStarted","Data":"dde39d1b1ad459c0f3e15aac9e9971542baccfabe5847911b79def5ff97927f6"} Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.882857 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.883755 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" event={"ID":"ef1716f0-3b35-433b-8ad9-5127c151a43f","Type":"ContainerStarted","Data":"057e4482bda37475f92f7fc6ec7662cf2cd8e43cad72a96ac88bfa69d8e3878f"} Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.884824 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.891230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" event={"ID":"398df167-3c2c-416f-b77c-baafa00ce747","Type":"ContainerStarted","Data":"55157a96669ec47eb2b34a48e8a9f605983750fea42c88dd056053d495eed7c2"} Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.891551 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx"
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.904048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" event={"ID":"08d7f313-71b6-4d2e-b67d-c9e1ddaca02d","Type":"ContainerStarted","Data":"16b09622b3fedc179f7a7145d64ad9996d6a2f5a61fdc394a093ad702a79c891"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.911464 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" event={"ID":"0768b406-8010-4804-8769-634ee21211ed","Type":"ContainerStarted","Data":"2258c6ee926e4091527093f8668805da937cf9c169c38e8063cfaa3793d56b18"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.912484 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn"
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.921807 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" event={"ID":"16f6e3cb-02d6-4820-953b-a728440b812f","Type":"ContainerStarted","Data":"f6d3b5d847576a44bee6238ce875a6acab40a5bc0e3c8ca0d88582f7d00bd69c"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.934256 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw" podStartSLOduration=5.586966294 podStartE2EDuration="31.934228396s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.36764284 +0000 UTC m=+856.663080398" lastFinishedPulling="2025-09-30 17:30:25.714904932 +0000 UTC m=+883.010342500" observedRunningTime="2025-09-30 17:30:27.927607845 +0000 UTC m=+885.223045403" watchObservedRunningTime="2025-09-30 17:30:27.934228396 +0000 UTC m=+885.229665954"
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.949616 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" event={"ID":"70e2e81b-be62-472a-8b70-d5aadace742e","Type":"ContainerStarted","Data":"a2bd5a5c7e5e82f22577299007fcf65f52bfc5425a027c90436b88cd3215159b"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.951002 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx"
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.955445 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b" event={"ID":"e2803e31-92d8-4071-8a82-84249fc01718","Type":"ContainerStarted","Data":"b95dd92aac2a669a19916bb9854447b6c5dbb25877202c5945672106d19be05d"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.955870 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b"
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.958546 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" event={"ID":"505c7e1c-5b6c-4736-8a1f-93ed5a92b47e","Type":"ContainerStarted","Data":"d844856277bf426cfcbb5296a9ba0f313ed1f29f688c9ea6e05bc4bd26af5a69"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.960439 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" event={"ID":"9e74b6a2-2c57-4514-9709-9db9b0a76d55","Type":"ContainerStarted","Data":"13551457ebfd06e49a9757e76e06c90c91e32ce91c5be56161ad756f855a6fe5"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.961278 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc"
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.973053 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx" podStartSLOduration=5.481051229 podStartE2EDuration="31.973027289s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.353236108 +0000 UTC m=+856.648673666" lastFinishedPulling="2025-09-30 17:30:25.845212168 +0000 UTC m=+883.140649726" observedRunningTime="2025-09-30 17:30:27.965266388 +0000 UTC m=+885.260703956" watchObservedRunningTime="2025-09-30 17:30:27.973027289 +0000 UTC m=+885.268464847"
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.975843 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8" event={"ID":"b4b7968e-2dbf-463c-addf-f79bb82c5fb3","Type":"ContainerStarted","Data":"c87975278522c64831cf3304851e0df7e28b183958d06bc775b44442cfe4db86"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.976081 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8"
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.978951 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh" event={"ID":"19e2aa18-5fe3-4451-8ca1-303b87a5e131","Type":"ContainerStarted","Data":"ba64919f9b536f6f42b89dd630883706522464791704f9d4e52b2b45742d6989"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.980677 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z" event={"ID":"ef369140-3b22-4cd7-926f-525ebde98f1a","Type":"ContainerStarted","Data":"d0be69bb4ed7fc37143cc8b3c41d0b8fcfd08df093505a8aa877357e293555d4"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.982527 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp" event={"ID":"8a89103b-24b0-456d-88c5-f6abd68e75fb","Type":"ContainerStarted","Data":"247b0a38d0a303163400338a1991a3782bf7f3b4efc6e670204eb6d3cf41f93b"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.989196 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c" event={"ID":"26fa1afc-ed5f-4541-aab0-e003a11f653c","Type":"ContainerStarted","Data":"0de623e76ebe3a03d7553eab0f6a0181227b007da2fc2166ee640017895bf1de"}
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.990275 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c"
Sep 30 17:30:27 crc kubenswrapper[4688]: I0930 17:30:27.991346 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn" podStartSLOduration=4.621807534 podStartE2EDuration="30.991304831s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.378198912 +0000 UTC m=+856.673636480" lastFinishedPulling="2025-09-30 17:30:25.747696219 +0000 UTC m=+883.043133777" observedRunningTime="2025-09-30 17:30:27.98894397 +0000 UTC m=+885.284381528" watchObservedRunningTime="2025-09-30 17:30:27.991304831 +0000 UTC m=+885.286742389"
Sep 30 17:30:28 crc kubenswrapper[4688]: I0930 17:30:28.002818 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" event={"ID":"940175df-0d37-4540-b0c7-a81bc3261ba6","Type":"ContainerStarted","Data":"d2c0f0117f043bc685f78f5a0a70cd6b032f8e9eacf84ccee0cf518497d0f980"}
Sep 30 17:30:28 crc kubenswrapper[4688]: I0930 17:30:28.015403 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq" event={"ID":"1bea0309-4386-4a05-a980-1104ec2e37c2","Type":"ContainerStarted","Data":"af89930006d1ed30d7b252838c0fa3c77e00cb08e0bedb5033f1e1a76f5c1474"}
Sep 30 17:30:28 crc kubenswrapper[4688]: I0930 17:30:28.016506 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq"
Sep 30 17:30:28 crc kubenswrapper[4688]: I0930 17:30:28.032040 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88" podStartSLOduration=7.12841079 podStartE2EDuration="32.032011262s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.476680276 +0000 UTC m=+855.772117844" lastFinishedPulling="2025-09-30 17:30:23.380280738 +0000 UTC m=+880.675718316" observedRunningTime="2025-09-30 17:30:28.027018383 +0000 UTC m=+885.322455961" watchObservedRunningTime="2025-09-30 17:30:28.032011262 +0000 UTC m=+885.327448820"
Sep 30 17:30:28 crc kubenswrapper[4688]: I0930 17:30:28.054613 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8" podStartSLOduration=5.920773387 podStartE2EDuration="32.054581895s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.461076273 +0000 UTC m=+855.756513831" lastFinishedPulling="2025-09-30 17:30:24.594884781 +0000 UTC m=+881.890322339" observedRunningTime="2025-09-30 17:30:28.044635108 +0000 UTC m=+885.340072686" watchObservedRunningTime="2025-09-30 17:30:28.054581895 +0000 UTC m=+885.350019453"
Sep 30 17:30:28 crc kubenswrapper[4688]: I0930 17:30:28.074386 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx" podStartSLOduration=4.767542458 podStartE2EDuration="31.074361226s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.426013527 +0000 UTC m=+856.721451085" lastFinishedPulling="2025-09-30 17:30:25.732832295 +0000 UTC m=+883.028269853" observedRunningTime="2025-09-30 17:30:28.071357408 +0000 UTC m=+885.366794976" watchObservedRunningTime="2025-09-30 17:30:28.074361226 +0000 UTC m=+885.369798784"
pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq" podStartSLOduration=6.939277495 podStartE2EDuration="32.099885405s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.217935043 +0000 UTC m=+855.513372601" lastFinishedPulling="2025-09-30 17:30:23.378542953 +0000 UTC m=+880.673980511" observedRunningTime="2025-09-30 17:30:28.097599596 +0000 UTC m=+885.393037154" watchObservedRunningTime="2025-09-30 17:30:28.099885405 +0000 UTC m=+885.395322963" Sep 30 17:30:28 crc kubenswrapper[4688]: I0930 17:30:28.163285 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc" podStartSLOduration=4.30808702 podStartE2EDuration="31.163253732s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.378237823 +0000 UTC m=+856.673675381" lastFinishedPulling="2025-09-30 17:30:26.233404525 +0000 UTC m=+883.528842093" observedRunningTime="2025-09-30 17:30:28.117261274 +0000 UTC m=+885.412698852" watchObservedRunningTime="2025-09-30 17:30:28.163253732 +0000 UTC m=+885.458691300" Sep 30 17:30:28 crc kubenswrapper[4688]: I0930 17:30:28.177617 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b" podStartSLOduration=3.797162383 podStartE2EDuration="31.177584482s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.340422067 +0000 UTC m=+856.635859625" lastFinishedPulling="2025-09-30 17:30:26.720844166 +0000 UTC m=+884.016281724" observedRunningTime="2025-09-30 17:30:28.167103602 +0000 UTC m=+885.462541180" watchObservedRunningTime="2025-09-30 17:30:28.177584482 +0000 UTC m=+885.473022050" Sep 30 17:30:28 crc kubenswrapper[4688]: I0930 17:30:28.245645 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c" podStartSLOduration=5.987179702 podStartE2EDuration="31.24562207s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.334206136 +0000 UTC m=+856.629643704" lastFinishedPulling="2025-09-30 17:30:24.592648514 +0000 UTC m=+881.888086072" observedRunningTime="2025-09-30 17:30:28.241185375 +0000 UTC m=+885.536622933" watchObservedRunningTime="2025-09-30 17:30:28.24562207 +0000 UTC m=+885.541059628" Sep 30 17:30:28 crc kubenswrapper[4688]: E0930 17:30:28.785629 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e3f947e9034a951620a76eaf41ceec95eefcef0eacb251b10993d6820d5e1af6\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" podUID="145c6edc-16b0-47ec-846b-638e354aa1dd" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.024760 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" event={"ID":"7c01591d-ed02-4a38-b0ec-560d9b41cc8c","Type":"ContainerStarted","Data":"dabbe0ba9408d66275162f9761c0de165085b0200c28ff194092f01d85fd16ae"} Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.024861 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" Sep 30 17:30:29 crc 
kubenswrapper[4688]: I0930 17:30:29.027588 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" event={"ID":"16f6e3cb-02d6-4820-953b-a728440b812f","Type":"ContainerStarted","Data":"60890a42f5c8a4cf02036abdeb3f7c5cdb1e3cc3bb5ed33df9250ebc6b4c9444"} Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.027716 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.029720 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" event={"ID":"8eba803d-c206-4418-a0b6-8594af2670aa","Type":"ContainerStarted","Data":"3ba65db25c819b44a98a26eb264f2a8e1c09cda8d234d8cc3dbc135e29e91c58"} Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.029900 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.031718 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" event={"ID":"940175df-0d37-4540-b0c7-a81bc3261ba6","Type":"ContainerStarted","Data":"c9c0cf6cc21f05e70229977dab2fc0d915ec730e9fda0d2af25371e9dd23555f"} Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.031892 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.034423 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" event={"ID":"f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d","Type":"ContainerStarted","Data":"966b5c0f65000a32087ea869b7c5d28de12838a3d6830c7efa37caf9fa118ea0"} Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.034575 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.036994 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" event={"ID":"505c7e1c-5b6c-4736-8a1f-93ed5a92b47e","Type":"ContainerStarted","Data":"7a06f7b5437808a1b1d4816c396f16a3fea30c7f2b68f6e85d1d0a291715bc1f"} Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.037545 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.038822 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" event={"ID":"08d7f313-71b6-4d2e-b67d-c9e1ddaca02d","Type":"ContainerStarted","Data":"9b6f5208f3428a4a49d3463589304b3dc129e0ed023051d63e72df73ae91dd90"} Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.039163 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.040781 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh" 
event={"ID":"19e2aa18-5fe3-4451-8ca1-303b87a5e131","Type":"ContainerStarted","Data":"04d999b6c1b6e7c6e0d5ad8645c23ac7286c8bc52289d8fbf25838b6af04e7fe"} Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.041165 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.043793 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z" event={"ID":"ef369140-3b22-4cd7-926f-525ebde98f1a","Type":"ContainerStarted","Data":"4e5c9ad17e7c3ee3af41c57fc596ac05847506f09b269dbc1a6dfc65003e684a"} Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.046953 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh" podStartSLOduration=7.131029948 podStartE2EDuration="33.046931718s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.680399738 +0000 UTC m=+855.975837296" lastFinishedPulling="2025-09-30 17:30:24.596301508 +0000 UTC m=+881.891739066" observedRunningTime="2025-09-30 17:30:29.043882989 +0000 UTC m=+886.339320547" watchObservedRunningTime="2025-09-30 17:30:29.046931718 +0000 UTC m=+886.342369276" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.065977 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh" podStartSLOduration=6.68785377 podStartE2EDuration="32.065961939s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.215711425 +0000 UTC m=+856.511148983" lastFinishedPulling="2025-09-30 17:30:24.593819594 +0000 UTC m=+881.889257152" observedRunningTime="2025-09-30 17:30:29.064885551 +0000 UTC m=+886.360323109" watchObservedRunningTime="2025-09-30 17:30:29.065961939 +0000 UTC m=+886.361399497" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.092978 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg" podStartSLOduration=3.8295199909999997 podStartE2EDuration="32.092957507s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.313491261 +0000 UTC m=+856.608928819" lastFinishedPulling="2025-09-30 17:30:27.576928767 +0000 UTC m=+884.872366335" observedRunningTime="2025-09-30 17:30:29.089924898 +0000 UTC m=+886.385362466" watchObservedRunningTime="2025-09-30 17:30:29.092957507 +0000 UTC m=+886.388395065" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.112136 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk" podStartSLOduration=7.20356691 podStartE2EDuration="33.112117971s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.687657285 +0000 UTC m=+855.983094843" lastFinishedPulling="2025-09-30 17:30:24.596208346 +0000 UTC m=+881.891645904" observedRunningTime="2025-09-30 17:30:29.107849171 +0000 UTC m=+886.403286729" watchObservedRunningTime="2025-09-30 17:30:29.112117971 +0000 UTC m=+886.407555529" Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.146084 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" 
Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.146084 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm" podStartSLOduration=7.62240814 podStartE2EDuration="32.146059498s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.857716488 +0000 UTC m=+856.153154046" lastFinishedPulling="2025-09-30 17:30:23.381367846 +0000 UTC m=+880.676805404" observedRunningTime="2025-09-30 17:30:29.138505173 +0000 UTC m=+886.433942751" watchObservedRunningTime="2025-09-30 17:30:29.146059498 +0000 UTC m=+886.441497056"
Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.166269 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg" podStartSLOduration=7.418070982 podStartE2EDuration="33.16625032s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.847544575 +0000 UTC m=+856.142982133" lastFinishedPulling="2025-09-30 17:30:24.595723913 +0000 UTC m=+881.891161471" observedRunningTime="2025-09-30 17:30:29.160717307 +0000 UTC m=+886.456154885" watchObservedRunningTime="2025-09-30 17:30:29.16625032 +0000 UTC m=+886.461687878"
Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.185265 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4" podStartSLOduration=3.875767055 podStartE2EDuration="33.18524267s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.485606627 +0000 UTC m=+856.781044185" lastFinishedPulling="2025-09-30 17:30:28.795082242 +0000 UTC m=+886.090519800" observedRunningTime="2025-09-30 17:30:29.180770625 +0000 UTC m=+886.476208173" watchObservedRunningTime="2025-09-30 17:30:29.18524267 +0000 UTC m=+886.480680228"
Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.214949 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg" podStartSLOduration=7.399658297 podStartE2EDuration="33.214925647s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.813837455 +0000 UTC m=+856.109275013" lastFinishedPulling="2025-09-30 17:30:24.629104805 +0000 UTC m=+881.924542363" observedRunningTime="2025-09-30 17:30:29.204787425 +0000 UTC m=+886.500225003" watchObservedRunningTime="2025-09-30 17:30:29.214925647 +0000 UTC m=+886.510363205"
Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.236899 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z" podStartSLOduration=6.478981855 podStartE2EDuration="32.236878764s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.837383063 +0000 UTC m=+856.132820621" lastFinishedPulling="2025-09-30 17:30:24.595279972 +0000 UTC m=+881.890717530" observedRunningTime="2025-09-30 17:30:29.233539788 +0000 UTC m=+886.528977346" watchObservedRunningTime="2025-09-30 17:30:29.236878764 +0000 UTC m=+886.532316312"
Sep 30 17:30:29 crc kubenswrapper[4688]: I0930 17:30:29.259626 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp" podStartSLOduration=7.294545112 podStartE2EDuration="33.259596401s" podCreationTimestamp="2025-09-30 17:29:56 +0000 UTC" firstStartedPulling="2025-09-30 17:29:58.630406877 +0000 UTC m=+855.925844435" lastFinishedPulling="2025-09-30 17:30:24.595458166 +0000 UTC m=+881.890895724" observedRunningTime="2025-09-30 17:30:29.254189481 +0000 UTC m=+886.549627049" watchObservedRunningTime="2025-09-30 17:30:29.259596401 +0000 UTC m=+886.555033959"
Sep 30 17:30:30 crc kubenswrapper[4688]: I0930 17:30:30.051415 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z"
Sep 30 17:30:31 crc kubenswrapper[4688]: I0930 17:30:31.060689 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-t64tg"
Sep 30 17:30:31 crc kubenswrapper[4688]: I0930 17:30:31.060778 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-xnhgk"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.112901 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-6ng88"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.125106 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-zrrc8"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.182657 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-98rhq"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.273453 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-vgshh"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.295495 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.302142 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-bklgp"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.494100 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-7l8bg"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.519213 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-7tgsx"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.576263 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-88c7-886sw"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.653284 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-h8tzm"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.683172 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-q5q9z"
Sep 30 17:30:37 crc kubenswrapper[4688]: I0930 17:30:37.740101 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-69zkg"
Sep 30 17:30:38 crc kubenswrapper[4688]: I0930 17:30:38.005242 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5ddfc99c65-m7slc"
Sep 30 17:30:38 crc kubenswrapper[4688]: I0930 17:30:38.078040 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-rd5sx"
Sep 30 17:30:38 crc kubenswrapper[4688]: I0930 17:30:38.105126 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-wkxsn"
Sep 30 17:30:38 crc kubenswrapper[4688]: I0930 17:30:38.142085 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-f66b554c6-cdx5c"
Sep 30 17:30:38 crc kubenswrapper[4688]: I0930 17:30:38.199962 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-g6d6b"
Sep 30 17:30:38 crc kubenswrapper[4688]: I0930 17:30:38.223423 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-f2nzh"
Sep 30 17:30:38 crc kubenswrapper[4688]: I0930 17:30:38.803792 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-zslc4"
Sep 30 17:30:40 crc kubenswrapper[4688]: I0930 17:30:40.139786 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr" event={"ID":"a01178d9-6b72-442e-b455-e5625b5e154e","Type":"ContainerStarted","Data":"8d009fcbf35c14b9b98d42f362bf9549f14e7f12d9637d50ef80fde24fb4e276"}
Sep 30 17:30:40 crc kubenswrapper[4688]: I0930 17:30:40.163089 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-hxqmr" podStartSLOduration=3.388428926 podStartE2EDuration="43.163065156s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:29:59.302823855 +0000 UTC m=+856.598261413" lastFinishedPulling="2025-09-30 17:30:39.077460075 +0000 UTC m=+896.372897643" observedRunningTime="2025-09-30 17:30:40.15934304 +0000 UTC m=+897.454780618" watchObservedRunningTime="2025-09-30 17:30:40.163065156 +0000 UTC m=+897.458502714"
Sep 30 17:30:41 crc kubenswrapper[4688]: I0930 17:30:41.149705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" event={"ID":"145c6edc-16b0-47ec-846b-638e354aa1dd","Type":"ContainerStarted","Data":"927701b17cb5a250c83878a04cb9c65e84f0f3ce667295ca636bf023c96e3c79"}
Sep 30 17:30:41 crc kubenswrapper[4688]: I0930 17:30:41.149932 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn"
Sep 30 17:30:41 crc kubenswrapper[4688]: I0930 17:30:41.181243 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" podStartSLOduration=6.11992288 podStartE2EDuration="44.181225425s" podCreationTimestamp="2025-09-30 17:29:57 +0000 UTC" firstStartedPulling="2025-09-30 17:30:02.871764731 +0000 UTC m=+860.167202289" lastFinishedPulling="2025-09-30 17:30:40.933067266 +0000 UTC m=+898.228504834" observedRunningTime="2025-09-30 17:30:41.178399472 +0000 UTC m=+898.473837030"
watchObservedRunningTime="2025-09-30 17:30:41.181225425 +0000 UTC m=+898.476662983" Sep 30 17:30:51 crc kubenswrapper[4688]: I0930 17:30:51.507100 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-m24qn" Sep 30 17:30:52 crc kubenswrapper[4688]: I0930 17:30:52.545652 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:30:52 crc kubenswrapper[4688]: I0930 17:30:52.545738 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.408618 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7cfk"] Sep 30 17:31:07 crc kubenswrapper[4688]: E0930 17:31:07.409796 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf29abb2-202c-41f6-89eb-e2a680ea4118" containerName="collect-profiles" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.409817 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf29abb2-202c-41f6-89eb-e2a680ea4118" containerName="collect-profiles" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.410076 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf29abb2-202c-41f6-89eb-e2a680ea4118" containerName="collect-profiles" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.411047 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.414861 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-pthhf" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.415674 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.416157 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.416442 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.443876 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7cfk"] Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.487112 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z4ppk"] Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.489379 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.492085 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.515101 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z4ppk"] Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.544715 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcnbg\" (UniqueName: \"kubernetes.io/projected/84ae9a11-6506-4a11-9e40-4750d30cccd7-kube-api-access-gcnbg\") pod \"dnsmasq-dns-675f4bcbfc-m7cfk\" (UID: \"84ae9a11-6506-4a11-9e40-4750d30cccd7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.544832 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae9a11-6506-4a11-9e40-4750d30cccd7-config\") pod \"dnsmasq-dns-675f4bcbfc-m7cfk\" (UID: \"84ae9a11-6506-4a11-9e40-4750d30cccd7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.646774 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmdm5\" (UniqueName: \"kubernetes.io/projected/1df8c144-75e4-40e2-a33a-decc9ce63c67-kube-api-access-pmdm5\") pod \"dnsmasq-dns-78dd6ddcc-z4ppk\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.646864 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcnbg\" (UniqueName: \"kubernetes.io/projected/84ae9a11-6506-4a11-9e40-4750d30cccd7-kube-api-access-gcnbg\") pod \"dnsmasq-dns-675f4bcbfc-m7cfk\" (UID: \"84ae9a11-6506-4a11-9e40-4750d30cccd7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.646915 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-config\") pod \"dnsmasq-dns-78dd6ddcc-z4ppk\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.646942 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae9a11-6506-4a11-9e40-4750d30cccd7-config\") pod \"dnsmasq-dns-675f4bcbfc-m7cfk\" (UID: \"84ae9a11-6506-4a11-9e40-4750d30cccd7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.647560 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z4ppk\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.648447 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae9a11-6506-4a11-9e40-4750d30cccd7-config\") pod \"dnsmasq-dns-675f4bcbfc-m7cfk\" (UID: \"84ae9a11-6506-4a11-9e40-4750d30cccd7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 
17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.669304 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcnbg\" (UniqueName: \"kubernetes.io/projected/84ae9a11-6506-4a11-9e40-4750d30cccd7-kube-api-access-gcnbg\") pod \"dnsmasq-dns-675f4bcbfc-m7cfk\" (UID: \"84ae9a11-6506-4a11-9e40-4750d30cccd7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.736970 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.748901 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-config\") pod \"dnsmasq-dns-78dd6ddcc-z4ppk\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.748958 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z4ppk\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.748991 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmdm5\" (UniqueName: \"kubernetes.io/projected/1df8c144-75e4-40e2-a33a-decc9ce63c67-kube-api-access-pmdm5\") pod \"dnsmasq-dns-78dd6ddcc-z4ppk\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.756808 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-config\") pod \"dnsmasq-dns-78dd6ddcc-z4ppk\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.757037 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z4ppk\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.774489 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmdm5\" (UniqueName: \"kubernetes.io/projected/1df8c144-75e4-40e2-a33a-decc9ce63c67-kube-api-access-pmdm5\") pod \"dnsmasq-dns-78dd6ddcc-z4ppk\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:07 crc kubenswrapper[4688]: I0930 17:31:07.809842 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:08 crc kubenswrapper[4688]: I0930 17:31:08.053765 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7cfk"] Sep 30 17:31:08 crc kubenswrapper[4688]: I0930 17:31:08.061092 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:31:08 crc kubenswrapper[4688]: I0930 17:31:08.131417 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z4ppk"] Sep 30 17:31:08 crc kubenswrapper[4688]: I0930 17:31:08.362114 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" event={"ID":"84ae9a11-6506-4a11-9e40-4750d30cccd7","Type":"ContainerStarted","Data":"9d076503a2901466b845b9145227a8ebd2eb016f6ebcf565cd0044b9a3362443"} Sep 30 17:31:08 crc kubenswrapper[4688]: I0930 17:31:08.363503 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" event={"ID":"1df8c144-75e4-40e2-a33a-decc9ce63c67","Type":"ContainerStarted","Data":"459b428e68368853651047113c253061f685c5ed179ad9adee79b465d9a4443f"} Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.656842 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7cfk"] Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.684956 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xzmjh"] Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.686358 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.716112 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xzmjh"] Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.806318 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xzmjh\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.806402 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6b5x\" (UniqueName: \"kubernetes.io/projected/04001a2f-4bd6-4432-bbe0-e21db51a90df-kube-api-access-x6b5x\") pod \"dnsmasq-dns-666b6646f7-xzmjh\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.806481 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-config\") pod \"dnsmasq-dns-666b6646f7-xzmjh\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.908314 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xzmjh\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.908386 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-x6b5x\" (UniqueName: \"kubernetes.io/projected/04001a2f-4bd6-4432-bbe0-e21db51a90df-kube-api-access-x6b5x\") pod \"dnsmasq-dns-666b6646f7-xzmjh\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.908429 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-config\") pod \"dnsmasq-dns-666b6646f7-xzmjh\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.909760 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xzmjh\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.911213 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-config\") pod \"dnsmasq-dns-666b6646f7-xzmjh\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:10 crc kubenswrapper[4688]: I0930 17:31:10.946717 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6b5x\" (UniqueName: \"kubernetes.io/projected/04001a2f-4bd6-4432-bbe0-e21db51a90df-kube-api-access-x6b5x\") pod \"dnsmasq-dns-666b6646f7-xzmjh\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.006793 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z4ppk"] Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.027317 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.049076 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rbzgt"] Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.051661 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.073562 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rbzgt"] Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.111351 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-config\") pod \"dnsmasq-dns-57d769cc4f-rbzgt\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") " pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.111405 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rbzgt\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") " pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.111745 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsk2c\" (UniqueName: \"kubernetes.io/projected/19816ac9-c1fd-443a-b05c-bd6a75a19f06-kube-api-access-wsk2c\") pod \"dnsmasq-dns-57d769cc4f-rbzgt\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") " pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.213804 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-config\") pod \"dnsmasq-dns-57d769cc4f-rbzgt\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") " pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.213872 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rbzgt\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") " pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.213983 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsk2c\" (UniqueName: \"kubernetes.io/projected/19816ac9-c1fd-443a-b05c-bd6a75a19f06-kube-api-access-wsk2c\") pod \"dnsmasq-dns-57d769cc4f-rbzgt\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") " pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.214947 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-config\") pod \"dnsmasq-dns-57d769cc4f-rbzgt\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") " pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.215176 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rbzgt\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") " pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.235253 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsk2c\" (UniqueName: 
\"kubernetes.io/projected/19816ac9-c1fd-443a-b05c-bd6a75a19f06-kube-api-access-wsk2c\") pod \"dnsmasq-dns-57d769cc4f-rbzgt\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") " pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.382948 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.850782 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.853076 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.863930 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.864238 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.864350 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.864523 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.864833 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.864991 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9d988" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.865120 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.890739 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.932739 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.932807 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-config-data\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.932851 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.932882 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " 
pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.932932 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.932981 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.933009 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.933028 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.933053 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.933082 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xjv2\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-kube-api-access-4xjv2\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:11 crc kubenswrapper[4688]: I0930 17:31:11.933102 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.034809 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.034876 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.034936 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.035002 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.035036 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.035061 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.035090 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.035134 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xjv2\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-kube-api-access-4xjv2\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.035160 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.035195 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.035972 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.036119 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.036330 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.037083 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-config-data\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.037160 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.037241 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.037345 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.046689 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.049595 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.057498 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.068424 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.086317 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xjv2\" (UniqueName: 
\"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-kube-api-access-4xjv2\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.101816 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.169508 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.172494 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.175065 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.175391 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.175526 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nzrpd" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.175664 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.175738 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.175887 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.175936 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.193323 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.199458 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.240654 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.240745 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.240769 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5a0754e-917b-4fcf-bc77-3b510479a986-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.240807 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.240829 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.240869 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.240895 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5a0754e-917b-4fcf-bc77-3b510479a986-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.241242 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.241277 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jczdd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-kube-api-access-jczdd\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.241303 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.241374 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.342797 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.342870 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5a0754e-917b-4fcf-bc77-3b510479a986-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.342897 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.342925 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jczdd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-kube-api-access-jczdd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.342944 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.342984 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.343019 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 
17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.343053 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.343073 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5a0754e-917b-4fcf-bc77-3b510479a986-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.343098 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.343117 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.343654 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.344009 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.344935 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.348952 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.349438 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.350322 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.353650 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5a0754e-917b-4fcf-bc77-3b510479a986-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.355627 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.362500 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.362895 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5a0754e-917b-4fcf-bc77-3b510479a986-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.373425 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.373857 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jczdd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-kube-api-access-jczdd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:12 crc kubenswrapper[4688]: I0930 17:31:12.533548 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.569443 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.571666 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.577494 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.577523 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.577674 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.578464 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6szrj" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.578705 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.592269 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.594721 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.596395 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n8mw\" (UniqueName: \"kubernetes.io/projected/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-kube-api-access-4n8mw\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.596477 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.596633 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.596681 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-config-data-default\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.596919 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-kolla-config\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.597029 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 
17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.597210 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-secrets\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.597259 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-config-data-generated\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.597374 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-operator-scripts\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.698483 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.699532 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-config-data-default\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.699589 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.699646 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-kolla-config\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.699680 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.699728 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-secrets\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.699755 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-config-data-generated\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.699798 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-operator-scripts\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.699829 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n8mw\" (UniqueName: \"kubernetes.io/projected/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-kube-api-access-4n8mw\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.700370 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.700704 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-config-data-generated\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.701156 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-kolla-config\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.701269 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-config-data-default\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.702871 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-operator-scripts\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.706221 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.719341 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 
crc kubenswrapper[4688]: I0930 17:31:14.720223 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n8mw\" (UniqueName: \"kubernetes.io/projected/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-kube-api-access-4n8mw\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.722386 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/16d86c38-a2f6-4f33-8a3f-d66ff12bdb38-secrets\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.725177 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38\") " pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.930784 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.984848 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.986936 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.990672 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.990892 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.991818 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kshr7" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.991966 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Sep 30 17:31:14 crc kubenswrapper[4688]: I0930 17:31:14.992241 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.107326 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbncn\" (UniqueName: \"kubernetes.io/projected/47da376c-d470-460a-90a1-00e696c83821-kube-api-access-hbncn\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.107414 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/47da376c-d470-460a-90a1-00e696c83821-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.107443 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.107461 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47da376c-d470-460a-90a1-00e696c83821-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.107496 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/47da376c-d470-460a-90a1-00e696c83821-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.107603 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47da376c-d470-460a-90a1-00e696c83821-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.107657 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/47da376c-d470-460a-90a1-00e696c83821-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.107684 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/47da376c-d470-460a-90a1-00e696c83821-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.107710 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/47da376c-d470-460a-90a1-00e696c83821-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.212581 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/47da376c-d470-460a-90a1-00e696c83821-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.212636 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.212656 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47da376c-d470-460a-90a1-00e696c83821-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.212690 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/47da376c-d470-460a-90a1-00e696c83821-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.212717 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47da376c-d470-460a-90a1-00e696c83821-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.212763 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/47da376c-d470-460a-90a1-00e696c83821-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.212794 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/47da376c-d470-460a-90a1-00e696c83821-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.212811 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/47da376c-d470-460a-90a1-00e696c83821-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.212842 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbncn\" (UniqueName: \"kubernetes.io/projected/47da376c-d470-460a-90a1-00e696c83821-kube-api-access-hbncn\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.213424 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.214924 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/47da376c-d470-460a-90a1-00e696c83821-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.215123 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/47da376c-d470-460a-90a1-00e696c83821-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 
crc kubenswrapper[4688]: I0930 17:31:15.215131 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47da376c-d470-460a-90a1-00e696c83821-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.215504 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/47da376c-d470-460a-90a1-00e696c83821-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.221485 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/47da376c-d470-460a-90a1-00e696c83821-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.224959 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47da376c-d470-460a-90a1-00e696c83821-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.225680 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/47da376c-d470-460a-90a1-00e696c83821-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.238943 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbncn\" (UniqueName: \"kubernetes.io/projected/47da376c-d470-460a-90a1-00e696c83821-kube-api-access-hbncn\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.282236 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"47da376c-d470-460a-90a1-00e696c83821\") " pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.346018 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.395680 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.397214 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.400551 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rj6bw" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.401061 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.401476 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.416976 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.524733 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1988f233-6da1-4c36-acbc-33f7dbaab1b0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.524806 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64s7\" (UniqueName: \"kubernetes.io/projected/1988f233-6da1-4c36-acbc-33f7dbaab1b0-kube-api-access-r64s7\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.524864 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1988f233-6da1-4c36-acbc-33f7dbaab1b0-config-data\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.524922 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1988f233-6da1-4c36-acbc-33f7dbaab1b0-kolla-config\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.524987 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1988f233-6da1-4c36-acbc-33f7dbaab1b0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.627142 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1988f233-6da1-4c36-acbc-33f7dbaab1b0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.627278 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1988f233-6da1-4c36-acbc-33f7dbaab1b0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.627305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r64s7\" (UniqueName: 
\"kubernetes.io/projected/1988f233-6da1-4c36-acbc-33f7dbaab1b0-kube-api-access-r64s7\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.627372 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1988f233-6da1-4c36-acbc-33f7dbaab1b0-config-data\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.627416 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1988f233-6da1-4c36-acbc-33f7dbaab1b0-kolla-config\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.628548 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1988f233-6da1-4c36-acbc-33f7dbaab1b0-kolla-config\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.629356 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1988f233-6da1-4c36-acbc-33f7dbaab1b0-config-data\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.632093 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1988f233-6da1-4c36-acbc-33f7dbaab1b0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.632549 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1988f233-6da1-4c36-acbc-33f7dbaab1b0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.654363 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r64s7\" (UniqueName: \"kubernetes.io/projected/1988f233-6da1-4c36-acbc-33f7dbaab1b0-kube-api-access-r64s7\") pod \"memcached-0\" (UID: \"1988f233-6da1-4c36-acbc-33f7dbaab1b0\") " pod="openstack/memcached-0" Sep 30 17:31:15 crc kubenswrapper[4688]: I0930 17:31:15.722124 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Sep 30 17:31:17 crc kubenswrapper[4688]: I0930 17:31:17.454018 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 17:31:17 crc kubenswrapper[4688]: I0930 17:31:17.455526 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 17:31:17 crc kubenswrapper[4688]: I0930 17:31:17.466918 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zlp9h" Sep 30 17:31:17 crc kubenswrapper[4688]: I0930 17:31:17.481239 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 17:31:17 crc kubenswrapper[4688]: I0930 17:31:17.558777 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8cqq\" (UniqueName: \"kubernetes.io/projected/b0233cd8-f154-4a33-9ff8-fd968ab0a4a3-kube-api-access-j8cqq\") pod \"kube-state-metrics-0\" (UID: \"b0233cd8-f154-4a33-9ff8-fd968ab0a4a3\") " pod="openstack/kube-state-metrics-0" Sep 30 17:31:17 crc kubenswrapper[4688]: I0930 17:31:17.661739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8cqq\" (UniqueName: \"kubernetes.io/projected/b0233cd8-f154-4a33-9ff8-fd968ab0a4a3-kube-api-access-j8cqq\") pod \"kube-state-metrics-0\" (UID: \"b0233cd8-f154-4a33-9ff8-fd968ab0a4a3\") " pod="openstack/kube-state-metrics-0" Sep 30 17:31:17 crc kubenswrapper[4688]: I0930 17:31:17.693213 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8cqq\" (UniqueName: \"kubernetes.io/projected/b0233cd8-f154-4a33-9ff8-fd968ab0a4a3-kube-api-access-j8cqq\") pod \"kube-state-metrics-0\" (UID: \"b0233cd8-f154-4a33-9ff8-fd968ab0a4a3\") " pod="openstack/kube-state-metrics-0" Sep 30 17:31:17 crc kubenswrapper[4688]: I0930 17:31:17.792919 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.729931 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-v6d9x"] Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.731454 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.734233 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-r265f" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.738971 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.739519 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.740732 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-v6d9x"] Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.768947 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-dsjj5"] Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.771144 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.803690 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dsjj5"] Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.813039 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a55636-8e3a-49bf-8c31-2980f614fd65-scripts\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.813274 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a55636-8e3a-49bf-8c31-2980f614fd65-var-log-ovn\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.813465 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5a55636-8e3a-49bf-8c31-2980f614fd65-var-run\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.813716 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a55636-8e3a-49bf-8c31-2980f614fd65-var-run-ovn\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.813795 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdl4b\" (UniqueName: \"kubernetes.io/projected/a5a55636-8e3a-49bf-8c31-2980f614fd65-kube-api-access-rdl4b\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.813980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a55636-8e3a-49bf-8c31-2980f614fd65-ovn-controller-tls-certs\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.814052 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5a55636-8e3a-49bf-8c31-2980f614fd65-combined-ca-bundle\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.915787 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszhp\" (UniqueName: \"kubernetes.io/projected/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-kube-api-access-gszhp\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.915860 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/a5a55636-8e3a-49bf-8c31-2980f614fd65-var-log-ovn\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.915881 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-var-run\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.915916 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5a55636-8e3a-49bf-8c31-2980f614fd65-var-run\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.915948 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a55636-8e3a-49bf-8c31-2980f614fd65-var-run-ovn\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.915969 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdl4b\" (UniqueName: \"kubernetes.io/projected/a5a55636-8e3a-49bf-8c31-2980f614fd65-kube-api-access-rdl4b\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.915986 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-etc-ovs\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.916005 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-var-lib\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.916050 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a55636-8e3a-49bf-8c31-2980f614fd65-ovn-controller-tls-certs\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.916073 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5a55636-8e3a-49bf-8c31-2980f614fd65-combined-ca-bundle\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.916093 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-scripts\") pod \"ovn-controller-ovs-dsjj5\" (UID: 
\"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.916112 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a55636-8e3a-49bf-8c31-2980f614fd65-scripts\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.916128 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-var-log\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.916697 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a55636-8e3a-49bf-8c31-2980f614fd65-var-run-ovn\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.917394 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a5a55636-8e3a-49bf-8c31-2980f614fd65-var-log-ovn\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.917486 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a5a55636-8e3a-49bf-8c31-2980f614fd65-var-run\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.922264 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a5a55636-8e3a-49bf-8c31-2980f614fd65-scripts\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.925894 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5a55636-8e3a-49bf-8c31-2980f614fd65-ovn-controller-tls-certs\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.929717 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5a55636-8e3a-49bf-8c31-2980f614fd65-combined-ca-bundle\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:20 crc kubenswrapper[4688]: I0930 17:31:20.933301 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdl4b\" (UniqueName: \"kubernetes.io/projected/a5a55636-8e3a-49bf-8c31-2980f614fd65-kube-api-access-rdl4b\") pod \"ovn-controller-v6d9x\" (UID: \"a5a55636-8e3a-49bf-8c31-2980f614fd65\") " pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.017534 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-var-lib\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.017617 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-etc-ovs\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.017689 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-scripts\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.017713 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-var-log\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.017799 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gszhp\" (UniqueName: \"kubernetes.io/projected/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-kube-api-access-gszhp\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.017826 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-var-run\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.018546 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-var-lib\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.018845 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-etc-ovs\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.018955 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-var-log\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.019433 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-var-run\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 
17:31:21.021557 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-scripts\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.039028 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gszhp\" (UniqueName: \"kubernetes.io/projected/a4a05b21-e8c9-45f9-bf12-56c74c0672fd-kube-api-access-gszhp\") pod \"ovn-controller-ovs-dsjj5\" (UID: \"a4a05b21-e8c9-45f9-bf12-56c74c0672fd\") " pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.055767 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.090752 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.122836 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.124323 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.126423 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.126455 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.131645 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.131655 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.131803 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-sb6hv" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.141930 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.221191 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.221249 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq7l2\" (UniqueName: \"kubernetes.io/projected/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-kube-api-access-lq7l2\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.221321 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc 
kubenswrapper[4688]: I0930 17:31:21.221345 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.221493 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.221561 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.221629 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.221741 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.323596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.323722 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.323775 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq7l2\" (UniqueName: \"kubernetes.io/projected/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-kube-api-access-lq7l2\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.323811 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.323861 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.323954 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.324048 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.324069 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.324938 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.325047 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.326546 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.333298 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.333292 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.334839 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.339591 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.344958 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq7l2\" (UniqueName: \"kubernetes.io/projected/2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de-kube-api-access-lq7l2\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.349969 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de\") " pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:21 crc kubenswrapper[4688]: I0930 17:31:21.456617 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:22 crc kubenswrapper[4688]: I0930 17:31:22.545308 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:31:22 crc kubenswrapper[4688]: I0930 17:31:22.545799 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.503492 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.505148 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.507533 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.507881 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-dldwq" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.508052 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.508222 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.518137 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.575194 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44203c2b-b94c-41a5-908c-a79aa2790bfc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.575270 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.575300 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44203c2b-b94c-41a5-908c-a79aa2790bfc-config\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.575427 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44203c2b-b94c-41a5-908c-a79aa2790bfc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.575608 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44203c2b-b94c-41a5-908c-a79aa2790bfc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.575675 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44203c2b-b94c-41a5-908c-a79aa2790bfc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.575769 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fjp8\" (UniqueName: \"kubernetes.io/projected/44203c2b-b94c-41a5-908c-a79aa2790bfc-kube-api-access-2fjp8\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " 
pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.575842 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44203c2b-b94c-41a5-908c-a79aa2790bfc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.679387 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44203c2b-b94c-41a5-908c-a79aa2790bfc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.679867 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44203c2b-b94c-41a5-908c-a79aa2790bfc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.680000 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.680115 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44203c2b-b94c-41a5-908c-a79aa2790bfc-config\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.681591 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44203c2b-b94c-41a5-908c-a79aa2790bfc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.681594 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44203c2b-b94c-41a5-908c-a79aa2790bfc-config\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.680917 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.682709 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44203c2b-b94c-41a5-908c-a79aa2790bfc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.685734 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44203c2b-b94c-41a5-908c-a79aa2790bfc-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.686309 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44203c2b-b94c-41a5-908c-a79aa2790bfc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.701936 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44203c2b-b94c-41a5-908c-a79aa2790bfc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.702012 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44203c2b-b94c-41a5-908c-a79aa2790bfc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.702116 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fjp8\" (UniqueName: \"kubernetes.io/projected/44203c2b-b94c-41a5-908c-a79aa2790bfc-kube-api-access-2fjp8\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.703218 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44203c2b-b94c-41a5-908c-a79aa2790bfc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.708077 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44203c2b-b94c-41a5-908c-a79aa2790bfc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.726981 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.727880 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fjp8\" (UniqueName: \"kubernetes.io/projected/44203c2b-b94c-41a5-908c-a79aa2790bfc-kube-api-access-2fjp8\") pod \"ovsdbserver-sb-0\" (UID: \"44203c2b-b94c-41a5-908c-a79aa2790bfc\") " pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:24 crc kubenswrapper[4688]: I0930 17:31:24.869178 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:25 crc kubenswrapper[4688]: I0930 17:31:25.098173 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xzmjh"] Sep 30 17:31:25 crc kubenswrapper[4688]: W0930 17:31:25.523449 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04001a2f_4bd6_4432_bbe0_e21db51a90df.slice/crio-25726a09e73fef5cf4b95525cf4cbc9fa2bb3032ac6f3f4e3026a887d48719b0 WatchSource:0}: Error finding container 25726a09e73fef5cf4b95525cf4cbc9fa2bb3032ac6f3f4e3026a887d48719b0: Status 404 returned error can't find the container with id 25726a09e73fef5cf4b95525cf4cbc9fa2bb3032ac6f3f4e3026a887d48719b0 Sep 30 17:31:25 crc kubenswrapper[4688]: E0930 17:31:25.524663 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 30 17:31:25 crc kubenswrapper[4688]: E0930 17:31:25.524845 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcnbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-m7cfk_openstack(84ae9a11-6506-4a11-9e40-4750d30cccd7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:31:25 crc kubenswrapper[4688]: E0930 17:31:25.526630 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" podUID="84ae9a11-6506-4a11-9e40-4750d30cccd7" Sep 30 17:31:25 crc kubenswrapper[4688]: E0930 17:31:25.544717 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 30 17:31:25 crc kubenswrapper[4688]: E0930 17:31:25.544917 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmdm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-z4ppk_openstack(1df8c144-75e4-40e2-a33a-decc9ce63c67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:31:25 crc kubenswrapper[4688]: E0930 17:31:25.546145 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" podUID="1df8c144-75e4-40e2-a33a-decc9ce63c67" Sep 30 17:31:25 crc kubenswrapper[4688]: I0930 17:31:25.667834 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" event={"ID":"04001a2f-4bd6-4432-bbe0-e21db51a90df","Type":"ContainerStarted","Data":"25726a09e73fef5cf4b95525cf4cbc9fa2bb3032ac6f3f4e3026a887d48719b0"} Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.254281 4688 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.273172 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.342821 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmdm5\" (UniqueName: \"kubernetes.io/projected/1df8c144-75e4-40e2-a33a-decc9ce63c67-kube-api-access-pmdm5\") pod \"1df8c144-75e4-40e2-a33a-decc9ce63c67\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.343469 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcnbg\" (UniqueName: \"kubernetes.io/projected/84ae9a11-6506-4a11-9e40-4750d30cccd7-kube-api-access-gcnbg\") pod \"84ae9a11-6506-4a11-9e40-4750d30cccd7\" (UID: \"84ae9a11-6506-4a11-9e40-4750d30cccd7\") " Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.343544 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-config\") pod \"1df8c144-75e4-40e2-a33a-decc9ce63c67\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.343626 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-dns-svc\") pod \"1df8c144-75e4-40e2-a33a-decc9ce63c67\" (UID: \"1df8c144-75e4-40e2-a33a-decc9ce63c67\") " Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.343750 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae9a11-6506-4a11-9e40-4750d30cccd7-config\") pod \"84ae9a11-6506-4a11-9e40-4750d30cccd7\" (UID: \"84ae9a11-6506-4a11-9e40-4750d30cccd7\") " Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.344347 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ae9a11-6506-4a11-9e40-4750d30cccd7-config" (OuterVolumeSpecName: "config") pod "84ae9a11-6506-4a11-9e40-4750d30cccd7" (UID: "84ae9a11-6506-4a11-9e40-4750d30cccd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.344360 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1df8c144-75e4-40e2-a33a-decc9ce63c67" (UID: "1df8c144-75e4-40e2-a33a-decc9ce63c67"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.344865 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-config" (OuterVolumeSpecName: "config") pod "1df8c144-75e4-40e2-a33a-decc9ce63c67" (UID: "1df8c144-75e4-40e2-a33a-decc9ce63c67"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.349596 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df8c144-75e4-40e2-a33a-decc9ce63c67-kube-api-access-pmdm5" (OuterVolumeSpecName: "kube-api-access-pmdm5") pod "1df8c144-75e4-40e2-a33a-decc9ce63c67" (UID: "1df8c144-75e4-40e2-a33a-decc9ce63c67"). InnerVolumeSpecName "kube-api-access-pmdm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.350379 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ae9a11-6506-4a11-9e40-4750d30cccd7-kube-api-access-gcnbg" (OuterVolumeSpecName: "kube-api-access-gcnbg") pod "84ae9a11-6506-4a11-9e40-4750d30cccd7" (UID: "84ae9a11-6506-4a11-9e40-4750d30cccd7"). InnerVolumeSpecName "kube-api-access-gcnbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.446690 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmdm5\" (UniqueName: \"kubernetes.io/projected/1df8c144-75e4-40e2-a33a-decc9ce63c67-kube-api-access-pmdm5\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.446743 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcnbg\" (UniqueName: \"kubernetes.io/projected/84ae9a11-6506-4a11-9e40-4750d30cccd7-kube-api-access-gcnbg\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.446760 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.446779 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1df8c144-75e4-40e2-a33a-decc9ce63c67-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.446794 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae9a11-6506-4a11-9e40-4750d30cccd7-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.630562 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 17:31:26 crc kubenswrapper[4688]: W0930 17:31:26.652042 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cfbeb46_f5ad_46f6_93e0_72c296703b5c.slice/crio-a5d7b56d466ed34cf382fffaa40919250bb7432a3639040501d577b0e2810188 WatchSource:0}: Error finding container a5d7b56d466ed34cf382fffaa40919250bb7432a3639040501d577b0e2810188: Status 404 returned error can't find the container with id a5d7b56d466ed34cf382fffaa40919250bb7432a3639040501d577b0e2810188 Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.658970 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-v6d9x"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.671782 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.677393 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.684961 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" event={"ID":"1df8c144-75e4-40e2-a33a-decc9ce63c67","Type":"ContainerDied","Data":"459b428e68368853651047113c253061f685c5ed179ad9adee79b465d9a4443f"} Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.684987 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z4ppk" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.698556 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.702899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cfbeb46-f5ad-46f6-93e0-72c296703b5c","Type":"ContainerStarted","Data":"a5d7b56d466ed34cf382fffaa40919250bb7432a3639040501d577b0e2810188"} Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.707424 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rbzgt"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.707492 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-v6d9x" event={"ID":"a5a55636-8e3a-49bf-8c31-2980f614fd65","Type":"ContainerStarted","Data":"ef1f120ad505f6fd17b2db59d11775c5eae348a411b9526d17bd0b667f33002f"} Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.710785 4688 generic.go:334] "Generic (PLEG): container finished" podID="04001a2f-4bd6-4432-bbe0-e21db51a90df" containerID="c4e2a56030e394ff769ac93b7c123a2c6954310c786ae01ad6c306411285a3da" exitCode=0 Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.710854 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" event={"ID":"04001a2f-4bd6-4432-bbe0-e21db51a90df","Type":"ContainerDied","Data":"c4e2a56030e394ff769ac93b7c123a2c6954310c786ae01ad6c306411285a3da"} Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.714047 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" event={"ID":"84ae9a11-6506-4a11-9e40-4750d30cccd7","Type":"ContainerDied","Data":"9d076503a2901466b845b9145227a8ebd2eb016f6ebcf565cd0044b9a3362443"} Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.714169 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-m7cfk" Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.716650 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5a0754e-917b-4fcf-bc77-3b510479a986","Type":"ContainerStarted","Data":"8e31b3533244a6b07a5521795af3fe6e29ef695604b57f656567f9f166aaca4c"} Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.717913 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.781839 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.842139 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.943889 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z4ppk"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.965610 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 30 17:31:26 crc kubenswrapper[4688]: I0930 17:31:26.972924 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z4ppk"] Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.004495 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7cfk"] Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.013970 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-m7cfk"] Sep 30 17:31:27 crc kubenswrapper[4688]: E0930 17:31:27.051325 4688 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Sep 30 17:31:27 crc kubenswrapper[4688]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/04001a2f-4bd6-4432-bbe0-e21db51a90df/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Sep 30 17:31:27 crc kubenswrapper[4688]: > podSandboxID="25726a09e73fef5cf4b95525cf4cbc9fa2bb3032ac6f3f4e3026a887d48719b0" Sep 30 17:31:27 crc kubenswrapper[4688]: E0930 17:31:27.051546 4688 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 30 17:31:27 crc kubenswrapper[4688]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6b5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xzmjh_openstack(04001a2f-4bd6-4432-bbe0-e21db51a90df): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/04001a2f-4bd6-4432-bbe0-e21db51a90df/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Sep 30 17:31:27 crc kubenswrapper[4688]: > logger="UnhandledError" Sep 30 17:31:27 crc kubenswrapper[4688]: E0930 17:31:27.053155 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/04001a2f-4bd6-4432-bbe0-e21db51a90df/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" podUID="04001a2f-4bd6-4432-bbe0-e21db51a90df" Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.463780 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df8c144-75e4-40e2-a33a-decc9ce63c67" path="/var/lib/kubelet/pods/1df8c144-75e4-40e2-a33a-decc9ce63c67/volumes" Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.471032 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84ae9a11-6506-4a11-9e40-4750d30cccd7" path="/var/lib/kubelet/pods/84ae9a11-6506-4a11-9e40-4750d30cccd7/volumes" Sep 30 
17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.735482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38","Type":"ContainerStarted","Data":"df9b32a7133726b04d33ab93af49451299b372a4fd28375e6d576d39edc6fad5"} Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.738946 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"47da376c-d470-460a-90a1-00e696c83821","Type":"ContainerStarted","Data":"76cd419f8fd60b02387bd5b8d550935dfd61b61a710139075cc3caca90c39810"} Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.741588 4688 generic.go:334] "Generic (PLEG): container finished" podID="19816ac9-c1fd-443a-b05c-bd6a75a19f06" containerID="9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd" exitCode=0 Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.741672 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" event={"ID":"19816ac9-c1fd-443a-b05c-bd6a75a19f06","Type":"ContainerDied","Data":"9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd"} Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.741718 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" event={"ID":"19816ac9-c1fd-443a-b05c-bd6a75a19f06","Type":"ContainerStarted","Data":"9ca6c87167a687149e7561c9bcd664e16e92788d041d28c52316b682c0de6490"} Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.745426 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1988f233-6da1-4c36-acbc-33f7dbaab1b0","Type":"ContainerStarted","Data":"a4d0972ac05ed57fb959c66ca75ba8a90563c5c264da613beeed0a96a96e225c"} Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.747192 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b0233cd8-f154-4a33-9ff8-fd968ab0a4a3","Type":"ContainerStarted","Data":"f0d26a4ca0cc35ef51cd390edc9346d5d58fd7bfd3372902092e6ff23d4dd53d"} Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.749366 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44203c2b-b94c-41a5-908c-a79aa2790bfc","Type":"ContainerStarted","Data":"3c6076aad17c83a5b9c6ad8a119eaaccb88de653a20d576c43b656952308ca93"} Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.752297 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de","Type":"ContainerStarted","Data":"85fb0c1356725fa2330125dc9c13aea77426c90da1c686e39e3ab86a1d46a943"} Sep 30 17:31:27 crc kubenswrapper[4688]: I0930 17:31:27.815778 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dsjj5"] Sep 30 17:31:27 crc kubenswrapper[4688]: W0930 17:31:27.845093 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4a05b21_e8c9_45f9_bf12_56c74c0672fd.slice/crio-2bd9a6eefd42c316849436593d2f113d21d08614808c34c8fd0e70918124f111 WatchSource:0}: Error finding container 2bd9a6eefd42c316849436593d2f113d21d08614808c34c8fd0e70918124f111: Status 404 returned error can't find the container with id 2bd9a6eefd42c316849436593d2f113d21d08614808c34c8fd0e70918124f111 Sep 30 17:31:28 crc kubenswrapper[4688]: I0930 17:31:28.760270 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-dsjj5" event={"ID":"a4a05b21-e8c9-45f9-bf12-56c74c0672fd","Type":"ContainerStarted","Data":"2bd9a6eefd42c316849436593d2f113d21d08614808c34c8fd0e70918124f111"} Sep 30 17:31:28 crc kubenswrapper[4688]: I0930 17:31:28.763683 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" event={"ID":"04001a2f-4bd6-4432-bbe0-e21db51a90df","Type":"ContainerStarted","Data":"fb7b5ac89e14c58e636fab6ca4e8bfb91f69bc3ccd23d6d7d54a8c6b8472b2e1"} Sep 30 17:31:28 crc kubenswrapper[4688]: I0930 17:31:28.763925 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:28 crc kubenswrapper[4688]: I0930 17:31:28.766562 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" event={"ID":"19816ac9-c1fd-443a-b05c-bd6a75a19f06","Type":"ContainerStarted","Data":"cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda"} Sep 30 17:31:28 crc kubenswrapper[4688]: I0930 17:31:28.788441 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" podStartSLOduration=18.270477489 podStartE2EDuration="18.788415277s" podCreationTimestamp="2025-09-30 17:31:10 +0000 UTC" firstStartedPulling="2025-09-30 17:31:25.526768483 +0000 UTC m=+942.822206041" lastFinishedPulling="2025-09-30 17:31:26.044706261 +0000 UTC m=+943.340143829" observedRunningTime="2025-09-30 17:31:28.781156669 +0000 UTC m=+946.076594237" watchObservedRunningTime="2025-09-30 17:31:28.788415277 +0000 UTC m=+946.083852825" Sep 30 17:31:30 crc kubenswrapper[4688]: I0930 17:31:29.794624 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" podStartSLOduration=18.794593715 podStartE2EDuration="18.794593715s" podCreationTimestamp="2025-09-30 17:31:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:31:29.792251824 +0000 UTC m=+947.087689382" watchObservedRunningTime="2025-09-30 17:31:29.794593715 +0000 UTC m=+947.090031273" Sep 30 17:31:31 crc kubenswrapper[4688]: I0930 17:31:31.384066 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:36 crc kubenswrapper[4688]: I0930 17:31:36.029820 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:36 crc kubenswrapper[4688]: I0930 17:31:36.385173 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" Sep 30 17:31:36 crc kubenswrapper[4688]: I0930 17:31:36.439883 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xzmjh"] Sep 30 17:31:36 crc kubenswrapper[4688]: I0930 17:31:36.830349 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" podUID="04001a2f-4bd6-4432-bbe0-e21db51a90df" containerName="dnsmasq-dns" containerID="cri-o://fb7b5ac89e14c58e636fab6ca4e8bfb91f69bc3ccd23d6d7d54a8c6b8472b2e1" gracePeriod=10 Sep 30 17:31:37 crc kubenswrapper[4688]: I0930 17:31:37.840281 4688 generic.go:334] "Generic (PLEG): container finished" podID="04001a2f-4bd6-4432-bbe0-e21db51a90df" containerID="fb7b5ac89e14c58e636fab6ca4e8bfb91f69bc3ccd23d6d7d54a8c6b8472b2e1" exitCode=0 Sep 30 17:31:37 crc kubenswrapper[4688]: 
I0930 17:31:37.840463 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" event={"ID":"04001a2f-4bd6-4432-bbe0-e21db51a90df","Type":"ContainerDied","Data":"fb7b5ac89e14c58e636fab6ca4e8bfb91f69bc3ccd23d6d7d54a8c6b8472b2e1"} Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.323258 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.481121 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-config\") pod \"04001a2f-4bd6-4432-bbe0-e21db51a90df\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.481354 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6b5x\" (UniqueName: \"kubernetes.io/projected/04001a2f-4bd6-4432-bbe0-e21db51a90df-kube-api-access-x6b5x\") pod \"04001a2f-4bd6-4432-bbe0-e21db51a90df\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.481718 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-dns-svc\") pod \"04001a2f-4bd6-4432-bbe0-e21db51a90df\" (UID: \"04001a2f-4bd6-4432-bbe0-e21db51a90df\") " Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.485018 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04001a2f-4bd6-4432-bbe0-e21db51a90df-kube-api-access-x6b5x" (OuterVolumeSpecName: "kube-api-access-x6b5x") pod "04001a2f-4bd6-4432-bbe0-e21db51a90df" (UID: "04001a2f-4bd6-4432-bbe0-e21db51a90df"). InnerVolumeSpecName "kube-api-access-x6b5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.533523 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-config" (OuterVolumeSpecName: "config") pod "04001a2f-4bd6-4432-bbe0-e21db51a90df" (UID: "04001a2f-4bd6-4432-bbe0-e21db51a90df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.539234 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "04001a2f-4bd6-4432-bbe0-e21db51a90df" (UID: "04001a2f-4bd6-4432-bbe0-e21db51a90df"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.584106 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.584142 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6b5x\" (UniqueName: \"kubernetes.io/projected/04001a2f-4bd6-4432-bbe0-e21db51a90df-kube-api-access-x6b5x\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.584153 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04001a2f-4bd6-4432-bbe0-e21db51a90df-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.852674 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" event={"ID":"04001a2f-4bd6-4432-bbe0-e21db51a90df","Type":"ContainerDied","Data":"25726a09e73fef5cf4b95525cf4cbc9fa2bb3032ac6f3f4e3026a887d48719b0"} Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.852733 4688 scope.go:117] "RemoveContainer" containerID="fb7b5ac89e14c58e636fab6ca4e8bfb91f69bc3ccd23d6d7d54a8c6b8472b2e1" Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.852871 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xzmjh" Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.938665 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xzmjh"] Sep 30 17:31:38 crc kubenswrapper[4688]: I0930 17:31:38.951300 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xzmjh"] Sep 30 17:31:39 crc kubenswrapper[4688]: I0930 17:31:39.353767 4688 scope.go:117] "RemoveContainer" containerID="c4e2a56030e394ff769ac93b7c123a2c6954310c786ae01ad6c306411285a3da" Sep 30 17:31:39 crc kubenswrapper[4688]: I0930 17:31:39.442881 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04001a2f-4bd6-4432-bbe0-e21db51a90df" path="/var/lib/kubelet/pods/04001a2f-4bd6-4432-bbe0-e21db51a90df/volumes" Sep 30 17:31:39 crc kubenswrapper[4688]: I0930 17:31:39.863401 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dsjj5" event={"ID":"a4a05b21-e8c9-45f9-bf12-56c74c0672fd","Type":"ContainerStarted","Data":"6f8bd00954d602c7692d14ccbdb79c55f5d1779975cc067b9a40b42d86686fce"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.877379 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1988f233-6da1-4c36-acbc-33f7dbaab1b0","Type":"ContainerStarted","Data":"c6b0179cbdaef03f960bf481960d715e87d1d5653bf669ea2421f85a5f9ef7f5"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.877823 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.879454 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b0233cd8-f154-4a33-9ff8-fd968ab0a4a3","Type":"ContainerStarted","Data":"d2fd4a2f3bf3a07ba713bde55401da324d1aadafad23a499e98a1d83b3e8711c"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.879689 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Sep 30 17:31:40 crc 
kubenswrapper[4688]: I0930 17:31:40.883313 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44203c2b-b94c-41a5-908c-a79aa2790bfc","Type":"ContainerStarted","Data":"5d1a1603219e6be933539cc46752c47542bbdd0e8c1656d55207da8b8bc6fc72"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.885188 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cfbeb46-f5ad-46f6-93e0-72c296703b5c","Type":"ContainerStarted","Data":"c082befd234dc0901188426171c3b48a4b3c235fbbcf5d94c96dffbfe15ac9d6"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.891923 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38","Type":"ContainerStarted","Data":"e46d55d95fa5f47c556afea806ab97099950a70054ad9c3e1c05088e88cdcb05"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.894061 4688 generic.go:334] "Generic (PLEG): container finished" podID="a4a05b21-e8c9-45f9-bf12-56c74c0672fd" containerID="6f8bd00954d602c7692d14ccbdb79c55f5d1779975cc067b9a40b42d86686fce" exitCode=0 Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.894124 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dsjj5" event={"ID":"a4a05b21-e8c9-45f9-bf12-56c74c0672fd","Type":"ContainerDied","Data":"6f8bd00954d602c7692d14ccbdb79c55f5d1779975cc067b9a40b42d86686fce"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.896805 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5a0754e-917b-4fcf-bc77-3b510479a986","Type":"ContainerStarted","Data":"3c6b763ad6b225b3b081ca605beead7987f546243dd6c3b45b77255cc612473d"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.902119 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de","Type":"ContainerStarted","Data":"be0bbf6655236f2bf23df46b365ebed9e6954f1c6e84dc797b3f8842eef6f214"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.903283 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.746955746 podStartE2EDuration="25.903261656s" podCreationTimestamp="2025-09-30 17:31:15 +0000 UTC" firstStartedPulling="2025-09-30 17:31:26.684278331 +0000 UTC m=+943.979715889" lastFinishedPulling="2025-09-30 17:31:37.840584231 +0000 UTC m=+955.136021799" observedRunningTime="2025-09-30 17:31:40.897782644 +0000 UTC m=+958.193220202" watchObservedRunningTime="2025-09-30 17:31:40.903261656 +0000 UTC m=+958.198699214" Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.908966 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-v6d9x" event={"ID":"a5a55636-8e3a-49bf-8c31-2980f614fd65","Type":"ContainerStarted","Data":"1336238b5afc1ba8bd854b879893478e52bdb6aa234025cd41759feff1857883"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.909145 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-v6d9x" Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.911690 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"47da376c-d470-460a-90a1-00e696c83821","Type":"ContainerStarted","Data":"3e86874b7bd86538ea2a361ee6049b3196a174dea731f9004a29aa4aef3aa47a"} Sep 30 17:31:40 crc kubenswrapper[4688]: I0930 17:31:40.958278 4688 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.011612904 podStartE2EDuration="23.958257546s" podCreationTimestamp="2025-09-30 17:31:17 +0000 UTC" firstStartedPulling="2025-09-30 17:31:26.688970242 +0000 UTC m=+943.984407800" lastFinishedPulling="2025-09-30 17:31:39.635614884 +0000 UTC m=+956.931052442" observedRunningTime="2025-09-30 17:31:40.955231908 +0000 UTC m=+958.250669476" watchObservedRunningTime="2025-09-30 17:31:40.958257546 +0000 UTC m=+958.253695104" Sep 30 17:31:41 crc kubenswrapper[4688]: I0930 17:31:41.114653 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-v6d9x" podStartSLOduration=9.371918229 podStartE2EDuration="21.114624345s" podCreationTimestamp="2025-09-30 17:31:20 +0000 UTC" firstStartedPulling="2025-09-30 17:31:26.661266907 +0000 UTC m=+943.956704455" lastFinishedPulling="2025-09-30 17:31:38.403973013 +0000 UTC m=+955.699410571" observedRunningTime="2025-09-30 17:31:41.110983771 +0000 UTC m=+958.406421329" watchObservedRunningTime="2025-09-30 17:31:41.114624345 +0000 UTC m=+958.410061903" Sep 30 17:31:41 crc kubenswrapper[4688]: I0930 17:31:41.924516 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dsjj5" event={"ID":"a4a05b21-e8c9-45f9-bf12-56c74c0672fd","Type":"ContainerStarted","Data":"9ac9cc5110f55dbd15b1bcc9d4d124993347162d232a2e173e2c2df032eb7fcb"} Sep 30 17:31:41 crc kubenswrapper[4688]: I0930 17:31:41.924988 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dsjj5" event={"ID":"a4a05b21-e8c9-45f9-bf12-56c74c0672fd","Type":"ContainerStarted","Data":"d2f51175dcfb72bb70c2b2e2755d00d391129d4f4b8b924ede0d817122d0ff81"} Sep 30 17:31:41 crc kubenswrapper[4688]: I0930 17:31:41.948537 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-dsjj5" podStartSLOduration=11.563033561 podStartE2EDuration="21.948518893s" podCreationTimestamp="2025-09-30 17:31:20 +0000 UTC" firstStartedPulling="2025-09-30 17:31:27.850784979 +0000 UTC m=+945.146222547" lastFinishedPulling="2025-09-30 17:31:38.236270321 +0000 UTC m=+955.531707879" observedRunningTime="2025-09-30 17:31:41.945114515 +0000 UTC m=+959.240552073" watchObservedRunningTime="2025-09-30 17:31:41.948518893 +0000 UTC m=+959.243956441" Sep 30 17:31:42 crc kubenswrapper[4688]: I0930 17:31:42.932979 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:42 crc kubenswrapper[4688]: I0930 17:31:42.933061 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:31:43 crc kubenswrapper[4688]: I0930 17:31:43.942385 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de","Type":"ContainerStarted","Data":"12bd80aa58273e91e1cb1dbae1cdc06afb3c346b5fb3e0d446845a39ff66724b"} Sep 30 17:31:43 crc kubenswrapper[4688]: I0930 17:31:43.945271 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"44203c2b-b94c-41a5-908c-a79aa2790bfc","Type":"ContainerStarted","Data":"ca28a255d64bff3c7103ec8c0880be4eafdc7b9699c682ad335d2c0f381078b9"} Sep 30 17:31:43 crc kubenswrapper[4688]: I0930 17:31:43.967438 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" 
podStartSLOduration=7.164775932 podStartE2EDuration="23.967410398s" podCreationTimestamp="2025-09-30 17:31:20 +0000 UTC" firstStartedPulling="2025-09-30 17:31:26.787073396 +0000 UTC m=+944.082510954" lastFinishedPulling="2025-09-30 17:31:43.589707862 +0000 UTC m=+960.885145420" observedRunningTime="2025-09-30 17:31:43.961903236 +0000 UTC m=+961.257340804" watchObservedRunningTime="2025-09-30 17:31:43.967410398 +0000 UTC m=+961.262847956" Sep 30 17:31:43 crc kubenswrapper[4688]: I0930 17:31:43.987518 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.341563068 podStartE2EDuration="20.987495457s" podCreationTimestamp="2025-09-30 17:31:23 +0000 UTC" firstStartedPulling="2025-09-30 17:31:26.947487559 +0000 UTC m=+944.242925117" lastFinishedPulling="2025-09-30 17:31:43.593419948 +0000 UTC m=+960.888857506" observedRunningTime="2025-09-30 17:31:43.985434014 +0000 UTC m=+961.280871592" watchObservedRunningTime="2025-09-30 17:31:43.987495457 +0000 UTC m=+961.282933025" Sep 30 17:31:44 crc kubenswrapper[4688]: I0930 17:31:44.869631 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:44 crc kubenswrapper[4688]: I0930 17:31:44.958898 4688 generic.go:334] "Generic (PLEG): container finished" podID="16d86c38-a2f6-4f33-8a3f-d66ff12bdb38" containerID="e46d55d95fa5f47c556afea806ab97099950a70054ad9c3e1c05088e88cdcb05" exitCode=0 Sep 30 17:31:44 crc kubenswrapper[4688]: I0930 17:31:44.959000 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38","Type":"ContainerDied","Data":"e46d55d95fa5f47c556afea806ab97099950a70054ad9c3e1c05088e88cdcb05"} Sep 30 17:31:44 crc kubenswrapper[4688]: I0930 17:31:44.962666 4688 generic.go:334] "Generic (PLEG): container finished" podID="47da376c-d470-460a-90a1-00e696c83821" containerID="3e86874b7bd86538ea2a361ee6049b3196a174dea731f9004a29aa4aef3aa47a" exitCode=0 Sep 30 17:31:44 crc kubenswrapper[4688]: I0930 17:31:44.962730 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"47da376c-d470-460a-90a1-00e696c83821","Type":"ContainerDied","Data":"3e86874b7bd86538ea2a361ee6049b3196a174dea731f9004a29aa4aef3aa47a"} Sep 30 17:31:45 crc kubenswrapper[4688]: I0930 17:31:45.457021 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:45 crc kubenswrapper[4688]: I0930 17:31:45.511015 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:45 crc kubenswrapper[4688]: I0930 17:31:45.727176 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Sep 30 17:31:45 crc kubenswrapper[4688]: I0930 17:31:45.869820 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:45 crc kubenswrapper[4688]: I0930 17:31:45.925054 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:45 crc kubenswrapper[4688]: I0930 17:31:45.973205 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"16d86c38-a2f6-4f33-8a3f-d66ff12bdb38","Type":"ContainerStarted","Data":"ad2724b9cc23553920d74fc87ed56aeef87f467e6e4cdbae47deb619dcfa4fa1"} Sep 30 17:31:45 crc kubenswrapper[4688]: I0930 
17:31:45.976036 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"47da376c-d470-460a-90a1-00e696c83821","Type":"ContainerStarted","Data":"440de7c94656e9d7b2bd812327561bc84d8516e9d6a136e4a3d0d1d37ea79527"} Sep 30 17:31:45 crc kubenswrapper[4688]: I0930 17:31:45.976830 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.002744 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.44703717 podStartE2EDuration="33.002711246s" podCreationTimestamp="2025-09-30 17:31:13 +0000 UTC" firstStartedPulling="2025-09-30 17:31:26.849162489 +0000 UTC m=+944.144600047" lastFinishedPulling="2025-09-30 17:31:38.404836575 +0000 UTC m=+955.700274123" observedRunningTime="2025-09-30 17:31:45.999126334 +0000 UTC m=+963.294563912" watchObservedRunningTime="2025-09-30 17:31:46.002711246 +0000 UTC m=+963.298148814" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.017371 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.024386 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.741313432 podStartE2EDuration="33.024366316s" podCreationTimestamp="2025-09-30 17:31:13 +0000 UTC" firstStartedPulling="2025-09-30 17:31:26.705976241 +0000 UTC m=+944.001413799" lastFinishedPulling="2025-09-30 17:31:37.989029125 +0000 UTC m=+955.284466683" observedRunningTime="2025-09-30 17:31:46.023137424 +0000 UTC m=+963.318574982" watchObservedRunningTime="2025-09-30 17:31:46.024366316 +0000 UTC m=+963.319803874" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.026026 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.302015 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-pp4c7"] Sep 30 17:31:46 crc kubenswrapper[4688]: E0930 17:31:46.302487 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04001a2f-4bd6-4432-bbe0-e21db51a90df" containerName="dnsmasq-dns" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.302515 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="04001a2f-4bd6-4432-bbe0-e21db51a90df" containerName="dnsmasq-dns" Sep 30 17:31:46 crc kubenswrapper[4688]: E0930 17:31:46.302535 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04001a2f-4bd6-4432-bbe0-e21db51a90df" containerName="init" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.302543 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="04001a2f-4bd6-4432-bbe0-e21db51a90df" containerName="init" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.302798 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="04001a2f-4bd6-4432-bbe0-e21db51a90df" containerName="dnsmasq-dns" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.303967 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.306814 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.362445 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-pp4c7"] Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.366890 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.367233 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmvx\" (UniqueName: \"kubernetes.io/projected/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-kube-api-access-fnmvx\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.367332 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-config\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.367522 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.464455 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-s67vw"] Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.468436 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.469727 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.469867 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.469946 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnmvx\" (UniqueName: \"kubernetes.io/projected/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-kube-api-access-fnmvx\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.469975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-config\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.471479 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-config\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.472200 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.472898 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.483500 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.526776 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-s67vw"] Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.550591 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnmvx\" (UniqueName: \"kubernetes.io/projected/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-kube-api-access-fnmvx\") pod \"dnsmasq-dns-7fd796d7df-pp4c7\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.571871 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd9b4\" (UniqueName: \"kubernetes.io/projected/263d16ad-0943-47a4-99f5-4b56dd223472-kube-api-access-bd9b4\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.571965 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263d16ad-0943-47a4-99f5-4b56dd223472-combined-ca-bundle\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.572035 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/263d16ad-0943-47a4-99f5-4b56dd223472-config\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.572074 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/263d16ad-0943-47a4-99f5-4b56dd223472-ovn-rundir\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.572095 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/263d16ad-0943-47a4-99f5-4b56dd223472-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.572192 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/263d16ad-0943-47a4-99f5-4b56dd223472-ovs-rundir\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.623068 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.673430 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/263d16ad-0943-47a4-99f5-4b56dd223472-ovn-rundir\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.673775 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/263d16ad-0943-47a4-99f5-4b56dd223472-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.673846 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/263d16ad-0943-47a4-99f5-4b56dd223472-ovs-rundir\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.673872 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd9b4\" (UniqueName: \"kubernetes.io/projected/263d16ad-0943-47a4-99f5-4b56dd223472-kube-api-access-bd9b4\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.673910 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263d16ad-0943-47a4-99f5-4b56dd223472-combined-ca-bundle\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.673952 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/263d16ad-0943-47a4-99f5-4b56dd223472-config\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.674655 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/263d16ad-0943-47a4-99f5-4b56dd223472-config\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.674909 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/263d16ad-0943-47a4-99f5-4b56dd223472-ovn-rundir\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.675018 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/263d16ad-0943-47a4-99f5-4b56dd223472-ovs-rundir\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc 
kubenswrapper[4688]: I0930 17:31:46.682646 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/263d16ad-0943-47a4-99f5-4b56dd223472-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.688644 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263d16ad-0943-47a4-99f5-4b56dd223472-combined-ca-bundle\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.689370 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.696939 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.700306 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2jr5v" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.700505 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.701689 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.719389 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd9b4\" (UniqueName: \"kubernetes.io/projected/263d16ad-0943-47a4-99f5-4b56dd223472-kube-api-access-bd9b4\") pod \"ovn-controller-metrics-s67vw\" (UID: \"263d16ad-0943-47a4-99f5-4b56dd223472\") " pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.720388 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.730077 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.756399 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-pp4c7"] Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.769845 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vd9vf"] Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.799428 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a63408-631b-4f20-a556-39bb5835e71e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.799503 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a63408-631b-4f20-a556-39bb5835e71e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.799649 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d9a63408-631b-4f20-a556-39bb5835e71e-config\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.799683 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d9a63408-631b-4f20-a556-39bb5835e71e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.799728 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n722n\" (UniqueName: \"kubernetes.io/projected/d9a63408-631b-4f20-a556-39bb5835e71e-kube-api-access-n722n\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.799766 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9a63408-631b-4f20-a556-39bb5835e71e-scripts\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.800104 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a63408-631b-4f20-a556-39bb5835e71e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.808198 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.809121 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-s67vw" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.810703 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.850862 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vd9vf"] Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.902352 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a63408-631b-4f20-a556-39bb5835e71e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.907505 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a63408-631b-4f20-a556-39bb5835e71e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.910806 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.910902 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a63408-631b-4f20-a556-39bb5835e71e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911172 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-config\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911297 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a63408-631b-4f20-a556-39bb5835e71e-config\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911402 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d9a63408-631b-4f20-a556-39bb5835e71e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911519 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xcqfq\" (UniqueName: \"kubernetes.io/projected/9b6af46b-3413-4911-979e-52aa9f74d151-kube-api-access-xcqfq\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911563 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n722n\" (UniqueName: \"kubernetes.io/projected/d9a63408-631b-4f20-a556-39bb5835e71e-kube-api-access-n722n\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911665 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9a63408-631b-4f20-a556-39bb5835e71e-scripts\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911732 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911774 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a63408-631b-4f20-a556-39bb5835e71e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.911894 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d9a63408-631b-4f20-a556-39bb5835e71e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.912555 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a63408-631b-4f20-a556-39bb5835e71e-config\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.913885 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9a63408-631b-4f20-a556-39bb5835e71e-scripts\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.915319 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9a63408-631b-4f20-a556-39bb5835e71e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.919658 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a63408-631b-4f20-a556-39bb5835e71e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:46 crc kubenswrapper[4688]: I0930 17:31:46.961818 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n722n\" (UniqueName: \"kubernetes.io/projected/d9a63408-631b-4f20-a556-39bb5835e71e-kube-api-access-n722n\") pod \"ovn-northd-0\" (UID: \"d9a63408-631b-4f20-a556-39bb5835e71e\") " pod="openstack/ovn-northd-0" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.014188 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-config\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.014273 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.014326 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcqfq\" (UniqueName: \"kubernetes.io/projected/9b6af46b-3413-4911-979e-52aa9f74d151-kube-api-access-xcqfq\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.014388 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.014446 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.015675 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.016408 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-config\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.017390 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.017592 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.037825 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcqfq\" (UniqueName: \"kubernetes.io/projected/9b6af46b-3413-4911-979e-52aa9f74d151-kube-api-access-xcqfq\") pod \"dnsmasq-dns-86db49b7ff-vd9vf\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.097482 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.140071 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.233288 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-pp4c7"] Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.328163 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-s67vw"] Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.587386 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Sep 30 17:31:47 crc kubenswrapper[4688]: W0930 17:31:47.698491 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b6af46b_3413_4911_979e_52aa9f74d151.slice/crio-80259aaf1bab388fc2997bf53a21f3dbb384695f1f157f8733a9844d29ccbbd4 WatchSource:0}: Error finding container 80259aaf1bab388fc2997bf53a21f3dbb384695f1f157f8733a9844d29ccbbd4: Status 404 returned error can't find the container with id 80259aaf1bab388fc2997bf53a21f3dbb384695f1f157f8733a9844d29ccbbd4 Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.698820 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vd9vf"] Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.827035 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.867864 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vd9vf"] Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.920703 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-rclzn"] Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.922715 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:47 crc kubenswrapper[4688]: I0930 17:31:47.934216 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rclzn"] Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.001170 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d9a63408-631b-4f20-a556-39bb5835e71e","Type":"ContainerStarted","Data":"c6a40614f1930009c8fdbb0e10a899f17717b7156e99f597ee6ced34ae52c814"} Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.023675 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" event={"ID":"9b6af46b-3413-4911-979e-52aa9f74d151","Type":"ContainerStarted","Data":"141d10a8142bd369c7057ec9a17a6b678719475d105f87eb43b3ea32ba7bf936"} Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.023751 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" event={"ID":"9b6af46b-3413-4911-979e-52aa9f74d151","Type":"ContainerStarted","Data":"80259aaf1bab388fc2997bf53a21f3dbb384695f1f157f8733a9844d29ccbbd4"} Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.029684 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-s67vw" event={"ID":"263d16ad-0943-47a4-99f5-4b56dd223472","Type":"ContainerStarted","Data":"32c8b921e8788b8476ed2c0ad016c2d0b306c0470197dec5980aa587943709b3"} Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.029737 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-s67vw" event={"ID":"263d16ad-0943-47a4-99f5-4b56dd223472","Type":"ContainerStarted","Data":"b50edd10dd901892f6557bad2eef15cfdfc8d6171e89eefc9c67fce01de1a8d0"} Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.033348 4688 generic.go:334] "Generic (PLEG): container finished" podID="22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4" containerID="65a277b4193710c9aac630a76c1132b24b7b5288b803bce455fa3e170e7f17c2" exitCode=0 Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.033618 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" event={"ID":"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4","Type":"ContainerDied","Data":"65a277b4193710c9aac630a76c1132b24b7b5288b803bce455fa3e170e7f17c2"} Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.033674 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" event={"ID":"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4","Type":"ContainerStarted","Data":"fec61702e695e2f849d24c1da066318dfecd819d3a5fe4d9c68b117413cbf938"} Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.037538 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.037631 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwxt8\" (UniqueName: \"kubernetes.io/projected/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-kube-api-access-vwxt8\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 
17:31:48.037658 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-config\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.037696 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.037752 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-dns-svc\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.118798 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-s67vw" podStartSLOduration=2.118770882 podStartE2EDuration="2.118770882s" podCreationTimestamp="2025-09-30 17:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:31:48.113527516 +0000 UTC m=+965.408965074" watchObservedRunningTime="2025-09-30 17:31:48.118770882 +0000 UTC m=+965.414208440" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.140731 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-dns-svc\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.140925 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.141064 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwxt8\" (UniqueName: \"kubernetes.io/projected/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-kube-api-access-vwxt8\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.141105 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-config\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.141152 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: 
\"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.142248 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-dns-svc\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.142337 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-config\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.142900 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.143203 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.162828 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwxt8\" (UniqueName: \"kubernetes.io/projected/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-kube-api-access-vwxt8\") pod \"dnsmasq-dns-698758b865-rclzn\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") " pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.249971 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rclzn" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.424813 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.457301 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnmvx\" (UniqueName: \"kubernetes.io/projected/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-kube-api-access-fnmvx\") pod \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.457457 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-dns-svc\") pod \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.457510 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-config\") pod \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.457663 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-ovsdbserver-nb\") pod \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\" (UID: \"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4\") " Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.467638 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-kube-api-access-fnmvx" (OuterVolumeSpecName: "kube-api-access-fnmvx") pod "22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4" (UID: "22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4"). InnerVolumeSpecName "kube-api-access-fnmvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.489826 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-config" (OuterVolumeSpecName: "config") pod "22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4" (UID: "22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.490464 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4" (UID: "22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.492080 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.499403 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4" (UID: "22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.560413 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcqfq\" (UniqueName: \"kubernetes.io/projected/9b6af46b-3413-4911-979e-52aa9f74d151-kube-api-access-xcqfq\") pod \"9b6af46b-3413-4911-979e-52aa9f74d151\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.560601 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-dns-svc\") pod \"9b6af46b-3413-4911-979e-52aa9f74d151\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.560681 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-sb\") pod \"9b6af46b-3413-4911-979e-52aa9f74d151\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.560759 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-config\") pod \"9b6af46b-3413-4911-979e-52aa9f74d151\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.560932 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-nb\") pod \"9b6af46b-3413-4911-979e-52aa9f74d151\" (UID: \"9b6af46b-3413-4911-979e-52aa9f74d151\") " Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.561444 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.561471 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnmvx\" (UniqueName: \"kubernetes.io/projected/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-kube-api-access-fnmvx\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.561487 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.561500 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.564960 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b6af46b-3413-4911-979e-52aa9f74d151-kube-api-access-xcqfq" (OuterVolumeSpecName: "kube-api-access-xcqfq") pod "9b6af46b-3413-4911-979e-52aa9f74d151" (UID: "9b6af46b-3413-4911-979e-52aa9f74d151"). InnerVolumeSpecName "kube-api-access-xcqfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.584398 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b6af46b-3413-4911-979e-52aa9f74d151" (UID: "9b6af46b-3413-4911-979e-52aa9f74d151"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.586168 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b6af46b-3413-4911-979e-52aa9f74d151" (UID: "9b6af46b-3413-4911-979e-52aa9f74d151"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.589301 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b6af46b-3413-4911-979e-52aa9f74d151" (UID: "9b6af46b-3413-4911-979e-52aa9f74d151"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.590167 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-config" (OuterVolumeSpecName: "config") pod "9b6af46b-3413-4911-979e-52aa9f74d151" (UID: "9b6af46b-3413-4911-979e-52aa9f74d151"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.662897 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.662943 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.662958 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.662971 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcqfq\" (UniqueName: \"kubernetes.io/projected/9b6af46b-3413-4911-979e-52aa9f74d151-kube-api-access-xcqfq\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.662989 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6af46b-3413-4911-979e-52aa9f74d151-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:48 crc kubenswrapper[4688]: I0930 17:31:48.741755 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rclzn"] Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.017797 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.018903 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9b6af46b-3413-4911-979e-52aa9f74d151" containerName="init" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.018929 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6af46b-3413-4911-979e-52aa9f74d151" containerName="init" Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.018953 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4" containerName="init" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.018967 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4" containerName="init" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.019420 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6af46b-3413-4911-979e-52aa9f74d151" containerName="init" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.019523 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4" containerName="init" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.059728 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.083779 4688 generic.go:334] "Generic (PLEG): container finished" podID="9b6af46b-3413-4911-979e-52aa9f74d151" containerID="141d10a8142bd369c7057ec9a17a6b678719475d105f87eb43b3ea32ba7bf936" exitCode=0 Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.084299 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.086327 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-pp4c7" event={"ID":"22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4","Type":"ContainerDied","Data":"fec61702e695e2f849d24c1da066318dfecd819d3a5fe4d9c68b117413cbf938"} Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.086387 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rclzn" event={"ID":"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce","Type":"ContainerStarted","Data":"459160ec077f31c96b1efc17932da872958ed3686c8274eb35fb4d60ebdaf6f6"} Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.086409 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" event={"ID":"9b6af46b-3413-4911-979e-52aa9f74d151","Type":"ContainerDied","Data":"141d10a8142bd369c7057ec9a17a6b678719475d105f87eb43b3ea32ba7bf936"} Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.086426 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" event={"ID":"9b6af46b-3413-4911-979e-52aa9f74d151","Type":"ContainerDied","Data":"80259aaf1bab388fc2997bf53a21f3dbb384695f1f157f8733a9844d29ccbbd4"} Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.086450 4688 scope.go:117] "RemoveContainer" containerID="65a277b4193710c9aac630a76c1132b24b7b5288b803bce455fa3e170e7f17c2" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.084430 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vd9vf" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.084656 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.089660 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.089982 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.090509 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-m4sc8" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.093006 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.126546 4688 scope.go:117] "RemoveContainer" containerID="141d10a8142bd369c7057ec9a17a6b678719475d105f87eb43b3ea32ba7bf936" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.181767 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-lock\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.181837 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.182105 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.182199 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-cache\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.182322 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drxmx\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-kube-api-access-drxmx\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.183659 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-pp4c7"] Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.195450 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-pp4c7"] Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.210489 4688 scope.go:117] "RemoveContainer" containerID="141d10a8142bd369c7057ec9a17a6b678719475d105f87eb43b3ea32ba7bf936" Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.213032 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"141d10a8142bd369c7057ec9a17a6b678719475d105f87eb43b3ea32ba7bf936\": container with ID starting with 
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.221939 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vd9vf"]
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.233404 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vd9vf"]
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.284064 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-lock\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.284540 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.284616 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.284650 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-cache\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.284692 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drxmx\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-kube-api-access-drxmx\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.284760 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.284802 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.284881 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:31:49.784851879 +0000 UTC m=+967.080289437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found
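The etc-swift failures above come from a projected volume whose source ConfigMap, swift-ring-files, does not exist yet; a projected source is required unless marked optional, so SetUp fails and is requeued until the object appears (presumably published later by the swift-ring-rebalance job seen below). A sketch of such a volume using the upstream corev1 types, with names taken from the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// etc-swift as a projected volume: kubelet can only SetUp this
	// volume once the "swift-ring-files" ConfigMap exists, because
	// the source is not marked Optional. Until then MountVolume.SetUp
	// fails and the operation is requeued with backoff.
	etcSwift := corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "swift-ring-files",
						},
						// Optional is left nil (false): a missing ConfigMap
						// is an error rather than an empty directory.
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", etcSwift)
}
```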
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.284991 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-lock\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.285170 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.285909 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-cache\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.308038 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drxmx\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-kube-api-access-drxmx\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.310169 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.442026 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4" path="/var/lib/kubelet/pods/22b7b36d-cd0f-47e8-8fd5-d2b6371b2bc4/volumes"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.442566 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b6af46b-3413-4911-979e-52aa9f74d151" path="/var/lib/kubelet/pods/9b6af46b-3413-4911-979e-52aa9f74d151/volumes"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.625268 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-th7w4"]
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.627004 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-th7w4"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.629458 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.629514 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.630033 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.643441 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-th7w4"]
Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.692378 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-th7w4"]
Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.693126 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-8f6s7 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-th7w4" podUID="f80c2e47-3151-4c0e-b82f-03ab29f38314"
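The "context canceled" sync error above is the pod worker reacting to the SyncLoop DELETE immediately before it: the pod was deleted while its volume mounts were still being processed, so the worker's context was canceled and the remaining mounts were abandoned. A minimal sketch of that shape, with hypothetical helper names:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// mountVolumes is a hypothetical stand-in for the volume-mount wait:
// it aborts as soon as the pod worker's context is canceled.
func mountVolumes(ctx context.Context, volumes []string) error {
	for _, v := range volumes {
		select {
		case <-ctx.Done():
			return fmt.Errorf("unmounted volumes=%v: %w", volumes, ctx.Err())
		case <-time.After(50 * time.Millisecond): // pretend each mount takes time
			fmt.Println("mounted", v)
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { // the pod is deleted shortly after the sync starts
		time.Sleep(75 * time.Millisecond)
		cancel()
	}()
	err := mountVolumes(ctx, []string{"scripts", "swiftconf", "dispersionconf"})
	fmt.Println("sync result:", err) // ends in "context canceled", as in the log
}
```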
\"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-swiftconf\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.694494 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80c2e47-3151-4c0e-b82f-03ab29f38314-etc-swift\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.796662 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f6s7\" (UniqueName: \"kubernetes.io/projected/f80c2e47-3151-4c0e-b82f-03ab29f38314-kube-api-access-8f6s7\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.796760 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.796788 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-combined-ca-bundle\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.796813 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-scripts\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.796866 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-swiftconf\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.796930 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80c2e47-3151-4c0e-b82f-03ab29f38314-etc-swift\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.796958 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-ring-data-devices\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.796979 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-dispersionconf\") pod \"swift-ring-rebalance-th7w4\" (UID: 
\"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.797325 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.797365 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:31:49 crc kubenswrapper[4688]: E0930 17:31:49.797431 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:31:50.797406138 +0000 UTC m=+968.092843896 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.797878 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-scripts\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.798131 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80c2e47-3151-4c0e-b82f-03ab29f38314-etc-swift\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.798698 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-ring-data-devices\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.804092 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-combined-ca-bundle\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.810002 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-swiftconf\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.814103 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-dispersionconf\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:49 crc kubenswrapper[4688]: I0930 17:31:49.827427 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f6s7\" (UniqueName: 
\"kubernetes.io/projected/f80c2e47-3151-4c0e-b82f-03ab29f38314-kube-api-access-8f6s7\") pod \"swift-ring-rebalance-th7w4\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") " pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.096775 4688 generic.go:334] "Generic (PLEG): container finished" podID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerID="ee3ad7acb1b2623d7bf727ee3ac1f55f5e42d84d385db1e53d518f5ce1815c71" exitCode=0 Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.096860 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rclzn" event={"ID":"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce","Type":"ContainerDied","Data":"ee3ad7acb1b2623d7bf727ee3ac1f55f5e42d84d385db1e53d518f5ce1815c71"} Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.098815 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d9a63408-631b-4f20-a556-39bb5835e71e","Type":"ContainerStarted","Data":"a23c41ca5e8ea3062a87a925f36b677d1761a230a396b8f6b8bb788b45ca812a"} Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.098854 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d9a63408-631b-4f20-a556-39bb5835e71e","Type":"ContainerStarted","Data":"a15ee8a07823bf6513c951c9aef02c0fddd2f80df64c59a3de3b9c08d658349c"} Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.099539 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.100700 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-th7w4" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.173283 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.766797939 podStartE2EDuration="4.173255046s" podCreationTimestamp="2025-09-30 17:31:46 +0000 UTC" firstStartedPulling="2025-09-30 17:31:47.60068309 +0000 UTC m=+964.896120648" lastFinishedPulling="2025-09-30 17:31:49.007140197 +0000 UTC m=+966.302577755" observedRunningTime="2025-09-30 17:31:50.167107187 +0000 UTC m=+967.462544765" watchObservedRunningTime="2025-09-30 17:31:50.173255046 +0000 UTC m=+967.468692604" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.208483 4688 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.304478 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-combined-ca-bundle\") pod \"f80c2e47-3151-4c0e-b82f-03ab29f38314\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") "
Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.304532 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80c2e47-3151-4c0e-b82f-03ab29f38314-etc-swift\") pod \"f80c2e47-3151-4c0e-b82f-03ab29f38314\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") "
Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.304621 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-scripts\") pod \"f80c2e47-3151-4c0e-b82f-03ab29f38314\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") "
Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.304647 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-dispersionconf\") pod \"f80c2e47-3151-4c0e-b82f-03ab29f38314\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") "
Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.304674 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-swiftconf\") pod \"f80c2e47-3151-4c0e-b82f-03ab29f38314\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") "
Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.304714 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f6s7\" (UniqueName: \"kubernetes.io/projected/f80c2e47-3151-4c0e-b82f-03ab29f38314-kube-api-access-8f6s7\") pod \"f80c2e47-3151-4c0e-b82f-03ab29f38314\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") "
Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.304762 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-ring-data-devices\") pod \"f80c2e47-3151-4c0e-b82f-03ab29f38314\" (UID: \"f80c2e47-3151-4c0e-b82f-03ab29f38314\") "
Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.305261 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80c2e47-3151-4c0e-b82f-03ab29f38314-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f80c2e47-3151-4c0e-b82f-03ab29f38314" (UID: "f80c2e47-3151-4c0e-b82f-03ab29f38314"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.305460 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-scripts" (OuterVolumeSpecName: "scripts") pod "f80c2e47-3151-4c0e-b82f-03ab29f38314" (UID: "f80c2e47-3151-4c0e-b82f-03ab29f38314"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.305745 4688 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f80c2e47-3151-4c0e-b82f-03ab29f38314-etc-swift\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.305764 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.305822 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f80c2e47-3151-4c0e-b82f-03ab29f38314" (UID: "f80c2e47-3151-4c0e-b82f-03ab29f38314"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.308424 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f80c2e47-3151-4c0e-b82f-03ab29f38314" (UID: "f80c2e47-3151-4c0e-b82f-03ab29f38314"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.309805 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80c2e47-3151-4c0e-b82f-03ab29f38314-kube-api-access-8f6s7" (OuterVolumeSpecName: "kube-api-access-8f6s7") pod "f80c2e47-3151-4c0e-b82f-03ab29f38314" (UID: "f80c2e47-3151-4c0e-b82f-03ab29f38314"). InnerVolumeSpecName "kube-api-access-8f6s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.311498 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f80c2e47-3151-4c0e-b82f-03ab29f38314" (UID: "f80c2e47-3151-4c0e-b82f-03ab29f38314"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.313355 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f80c2e47-3151-4c0e-b82f-03ab29f38314" (UID: "f80c2e47-3151-4c0e-b82f-03ab29f38314"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.406995 4688 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f80c2e47-3151-4c0e-b82f-03ab29f38314-ring-data-devices\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.407391 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.407401 4688 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-dispersionconf\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.407412 4688 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f80c2e47-3151-4c0e-b82f-03ab29f38314-swiftconf\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.407422 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f6s7\" (UniqueName: \"kubernetes.io/projected/f80c2e47-3151-4c0e-b82f-03ab29f38314-kube-api-access-8f6s7\") on node \"crc\" DevicePath \"\"" Sep 30 17:31:50 crc kubenswrapper[4688]: I0930 17:31:50.814907 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:31:50 crc kubenswrapper[4688]: E0930 17:31:50.815116 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:31:50 crc kubenswrapper[4688]: E0930 17:31:50.815310 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:31:50 crc kubenswrapper[4688]: E0930 17:31:50.815377 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:31:52.815356631 +0000 UTC m=+970.110794189 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:31:51 crc kubenswrapper[4688]: I0930 17:31:51.110632 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rclzn" event={"ID":"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce","Type":"ContainerStarted","Data":"09528c74c0d13333ec0fad99d4b54d8c9cdda2eca0cc49b1baf72f01742b5e98"} Sep 30 17:31:51 crc kubenswrapper[4688]: I0930 17:31:51.110830 4688 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:31:51 crc kubenswrapper[4688]: I0930 17:31:51.147930 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-rclzn" podStartSLOduration=4.14790634 podStartE2EDuration="4.14790634s" podCreationTimestamp="2025-09-30 17:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:31:51.141754621 +0000 UTC m=+968.437192199" watchObservedRunningTime="2025-09-30 17:31:51.14790634 +0000 UTC m=+968.443343898"
Sep 30 17:31:51 crc kubenswrapper[4688]: I0930 17:31:51.179739 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-th7w4"]
Sep 30 17:31:51 crc kubenswrapper[4688]: I0930 17:31:51.187868 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-th7w4"]
Sep 30 17:31:51 crc kubenswrapper[4688]: I0930 17:31:51.441361 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80c2e47-3151-4c0e-b82f-03ab29f38314" path="/var/lib/kubelet/pods/f80c2e47-3151-4c0e-b82f-03ab29f38314/volumes"
Sep 30 17:31:52 crc kubenswrapper[4688]: I0930 17:31:52.118850 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-rclzn"
Sep 30 17:31:52 crc kubenswrapper[4688]: I0930 17:31:52.546030 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:31:52 crc kubenswrapper[4688]: I0930 17:31:52.546350 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:31:52 crc kubenswrapper[4688]: I0930 17:31:52.546397 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4"
Sep 30 17:31:52 crc kubenswrapper[4688]: I0930 17:31:52.546912 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26d311e67dd58a633fdcdd12ef828cb6c66054ad9ffec131080d59786bc5b556"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 17:31:52 crc kubenswrapper[4688]: I0930 17:31:52.546965 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://26d311e67dd58a633fdcdd12ef828cb6c66054ad9ffec131080d59786bc5b556" gracePeriod=600
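The machine-config-daemon restart above is the standard liveness flow: the HTTP probe to 127.0.0.1:8798/health gets connection refused, the probe is recorded as failed, and the container is killed with its termination grace period (600s here) so it can be restarted. A small sketch of such an HTTP prober; the threshold of 3 consecutive failures is an assumption (kubelet's default), not something visible in this log:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check, the way the log's
// "connect: connection refused" failure would be produced.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// kubelet treats 2xx/3xx as success.
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const failureThreshold = 3 // assumed kubelet default, not from the log
	failures := 0
	for i := 0; i < failureThreshold; i++ {
		if err := probe("http://127.0.0.1:8798/health"); err != nil {
			failures++
			fmt.Println("Probe failed:", err)
		} else {
			failures = 0
		}
	}
	if failures >= failureThreshold {
		fmt.Println("container failed liveness probe, will be restarted (gracePeriod=600)")
	}
}
```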
Sep 30 17:31:52 crc kubenswrapper[4688]: I0930 17:31:52.863070 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:52 crc kubenswrapper[4688]: E0930 17:31:52.863334 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 17:31:52 crc kubenswrapper[4688]: E0930 17:31:52.863377 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Sep 30 17:31:52 crc kubenswrapper[4688]: E0930 17:31:52.863487 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:31:56.86345088 +0000 UTC m=+974.158888448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found
Sep 30 17:31:53 crc kubenswrapper[4688]: I0930 17:31:53.132509 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="26d311e67dd58a633fdcdd12ef828cb6c66054ad9ffec131080d59786bc5b556" exitCode=0
Sep 30 17:31:53 crc kubenswrapper[4688]: I0930 17:31:53.133317 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"26d311e67dd58a633fdcdd12ef828cb6c66054ad9ffec131080d59786bc5b556"}
Sep 30 17:31:53 crc kubenswrapper[4688]: I0930 17:31:53.133359 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"60da3ee0a4d40156f3d5a2c83b4b356b77441efafed42128600f215199dc7b8d"}
Sep 30 17:31:53 crc kubenswrapper[4688]: I0930 17:31:53.133381 4688 scope.go:117] "RemoveContainer" containerID="5020dee8e64f5f30e7a64368e8118bacf47eeb6e8b507a0b15d70edc0bd6cd00"
Sep 30 17:31:54 crc kubenswrapper[4688]: I0930 17:31:54.931237 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Sep 30 17:31:54 crc kubenswrapper[4688]: I0930 17:31:54.931759 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Sep 30 17:31:54 crc kubenswrapper[4688]: I0930 17:31:54.981037 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.199648 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.347835 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.348255 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.398971 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.734985 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8kn9n"]
Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.736545 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8kn9n"
"No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8kn9n" Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.749406 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8kn9n"] Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.889276 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-hw9h7"] Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.890710 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hw9h7" Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.900610 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hw9h7"] Sep 30 17:31:55 crc kubenswrapper[4688]: I0930 17:31:55.932179 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5r9\" (UniqueName: \"kubernetes.io/projected/52a6b9bb-4d9c-4166-8d08-58d7428a1d0e-kube-api-access-8f5r9\") pod \"placement-db-create-8kn9n\" (UID: \"52a6b9bb-4d9c-4166-8d08-58d7428a1d0e\") " pod="openstack/placement-db-create-8kn9n" Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.034032 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f5r9\" (UniqueName: \"kubernetes.io/projected/52a6b9bb-4d9c-4166-8d08-58d7428a1d0e-kube-api-access-8f5r9\") pod \"placement-db-create-8kn9n\" (UID: \"52a6b9bb-4d9c-4166-8d08-58d7428a1d0e\") " pod="openstack/placement-db-create-8kn9n" Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.034738 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t4fq\" (UniqueName: \"kubernetes.io/projected/907d92af-aa64-4815-94d7-f3afd616bdd1-kube-api-access-2t4fq\") pod \"glance-db-create-hw9h7\" (UID: \"907d92af-aa64-4815-94d7-f3afd616bdd1\") " pod="openstack/glance-db-create-hw9h7" Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.061364 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f5r9\" (UniqueName: \"kubernetes.io/projected/52a6b9bb-4d9c-4166-8d08-58d7428a1d0e-kube-api-access-8f5r9\") pod \"placement-db-create-8kn9n\" (UID: \"52a6b9bb-4d9c-4166-8d08-58d7428a1d0e\") " pod="openstack/placement-db-create-8kn9n" Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.136788 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t4fq\" (UniqueName: \"kubernetes.io/projected/907d92af-aa64-4815-94d7-f3afd616bdd1-kube-api-access-2t4fq\") pod \"glance-db-create-hw9h7\" (UID: \"907d92af-aa64-4815-94d7-f3afd616bdd1\") " pod="openstack/glance-db-create-hw9h7" Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.157121 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t4fq\" (UniqueName: \"kubernetes.io/projected/907d92af-aa64-4815-94d7-f3afd616bdd1-kube-api-access-2t4fq\") pod \"glance-db-create-hw9h7\" (UID: \"907d92af-aa64-4815-94d7-f3afd616bdd1\") " pod="openstack/glance-db-create-hw9h7" Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.209989 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.212438 4688 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.361526 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8kn9n"
Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.715630 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8kn9n"]
Sep 30 17:31:56 crc kubenswrapper[4688]: W0930 17:31:56.722595 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52a6b9bb_4d9c_4166_8d08_58d7428a1d0e.slice/crio-f724f67a355067c25e5f6c0f05dc2265d3a889fe2f07396996ac6d465d2d16e5 WatchSource:0}: Error finding container f724f67a355067c25e5f6c0f05dc2265d3a889fe2f07396996ac6d465d2d16e5: Status 404 returned error can't find the container with id f724f67a355067c25e5f6c0f05dc2265d3a889fe2f07396996ac6d465d2d16e5
Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.745732 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hw9h7"]
Sep 30 17:31:56 crc kubenswrapper[4688]: W0930 17:31:56.758537 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907d92af_aa64_4815_94d7_f3afd616bdd1.slice/crio-7fc064a755668f875ea36911f1d13e3dc12f6272626ad949ce663dccf88c0d84 WatchSource:0}: Error finding container 7fc064a755668f875ea36911f1d13e3dc12f6272626ad949ce663dccf88c0d84: Status 404 returned error can't find the container with id 7fc064a755668f875ea36911f1d13e3dc12f6272626ad949ce663dccf88c0d84
Sep 30 17:31:56 crc kubenswrapper[4688]: I0930 17:31:56.957929 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:31:56 crc kubenswrapper[4688]: E0930 17:31:56.958119 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 17:31:56 crc kubenswrapper[4688]: E0930 17:31:56.958153 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Sep 30 17:31:56 crc kubenswrapper[4688]: E0930 17:31:56.958250 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:32:04.958227928 +0000 UTC m=+982.253665486 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found
Sep 30 17:31:57 crc kubenswrapper[4688]: E0930 17:31:57.052767 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52a6b9bb_4d9c_4166_8d08_58d7428a1d0e.slice/crio-conmon-f7ddba0df00dd0d61b9a410e6930a1f7c1413adf8b6d56a7c0a4f536e777e268.scope\": RecentStats: unable to find data in memory cache]"
Sep 30 17:31:57 crc kubenswrapper[4688]: I0930 17:31:57.171098 4688 generic.go:334] "Generic (PLEG): container finished" podID="907d92af-aa64-4815-94d7-f3afd616bdd1" containerID="35ff587d4488113a19765dffd380458bf2d1c3247004196e15fdc90fb0eec650" exitCode=0
Sep 30 17:31:57 crc kubenswrapper[4688]: I0930 17:31:57.171186 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hw9h7" event={"ID":"907d92af-aa64-4815-94d7-f3afd616bdd1","Type":"ContainerDied","Data":"35ff587d4488113a19765dffd380458bf2d1c3247004196e15fdc90fb0eec650"}
Sep 30 17:31:57 crc kubenswrapper[4688]: I0930 17:31:57.171220 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hw9h7" event={"ID":"907d92af-aa64-4815-94d7-f3afd616bdd1","Type":"ContainerStarted","Data":"7fc064a755668f875ea36911f1d13e3dc12f6272626ad949ce663dccf88c0d84"}
Sep 30 17:31:57 crc kubenswrapper[4688]: I0930 17:31:57.173382 4688 generic.go:334] "Generic (PLEG): container finished" podID="52a6b9bb-4d9c-4166-8d08-58d7428a1d0e" containerID="f7ddba0df00dd0d61b9a410e6930a1f7c1413adf8b6d56a7c0a4f536e777e268" exitCode=0
Sep 30 17:31:57 crc kubenswrapper[4688]: I0930 17:31:57.173522 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8kn9n" event={"ID":"52a6b9bb-4d9c-4166-8d08-58d7428a1d0e","Type":"ContainerDied","Data":"f7ddba0df00dd0d61b9a410e6930a1f7c1413adf8b6d56a7c0a4f536e777e268"}
Sep 30 17:31:57 crc kubenswrapper[4688]: I0930 17:31:57.173641 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8kn9n" event={"ID":"52a6b9bb-4d9c-4166-8d08-58d7428a1d0e","Type":"ContainerStarted","Data":"f724f67a355067c25e5f6c0f05dc2265d3a889fe2f07396996ac6d465d2d16e5"}
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.252797 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-rclzn"
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.350502 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rbzgt"]
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.350835 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" podUID="19816ac9-c1fd-443a-b05c-bd6a75a19f06" containerName="dnsmasq-dns" containerID="cri-o://cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda" gracePeriod=10
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.563082 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hw9h7"
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.678811 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8kn9n"
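The "Killing container with a grace period" lines (600s for machine-config-daemon earlier, 10s for the old dnsmasq here, during its replacement by dnsmasq-dns-698758b865) follow the usual TERM-then-KILL contract: send SIGTERM, wait up to the grace period, then escalate to SIGKILL. A sketch of that contract against an ordinary process rather than the CRI:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, waits up to grace for the process to
// exit, and escalates to SIGKILL, mirroring the container shutdown
// contract behind "Killing container with a grace period".
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL after the grace period lapses
		<-done
		fmt.Println("killed after grace period expired")
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 10*time.Second) // gracePeriod=10, as for dnsmasq-dns
}
```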
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.691428 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t4fq\" (UniqueName: \"kubernetes.io/projected/907d92af-aa64-4815-94d7-f3afd616bdd1-kube-api-access-2t4fq\") pod \"907d92af-aa64-4815-94d7-f3afd616bdd1\" (UID: \"907d92af-aa64-4815-94d7-f3afd616bdd1\") "
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.723098 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907d92af-aa64-4815-94d7-f3afd616bdd1-kube-api-access-2t4fq" (OuterVolumeSpecName: "kube-api-access-2t4fq") pod "907d92af-aa64-4815-94d7-f3afd616bdd1" (UID: "907d92af-aa64-4815-94d7-f3afd616bdd1"). InnerVolumeSpecName "kube-api-access-2t4fq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.793560 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f5r9\" (UniqueName: \"kubernetes.io/projected/52a6b9bb-4d9c-4166-8d08-58d7428a1d0e-kube-api-access-8f5r9\") pod \"52a6b9bb-4d9c-4166-8d08-58d7428a1d0e\" (UID: \"52a6b9bb-4d9c-4166-8d08-58d7428a1d0e\") "
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.794256 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t4fq\" (UniqueName: \"kubernetes.io/projected/907d92af-aa64-4815-94d7-f3afd616bdd1-kube-api-access-2t4fq\") on node \"crc\" DevicePath \"\""
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.802427 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a6b9bb-4d9c-4166-8d08-58d7428a1d0e-kube-api-access-8f5r9" (OuterVolumeSpecName: "kube-api-access-8f5r9") pod "52a6b9bb-4d9c-4166-8d08-58d7428a1d0e" (UID: "52a6b9bb-4d9c-4166-8d08-58d7428a1d0e"). InnerVolumeSpecName "kube-api-access-8f5r9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.828521 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt"
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.899090 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-dns-svc\") pod \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") "
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.899199 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-config\") pod \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") "
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.899401 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsk2c\" (UniqueName: \"kubernetes.io/projected/19816ac9-c1fd-443a-b05c-bd6a75a19f06-kube-api-access-wsk2c\") pod \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\" (UID: \"19816ac9-c1fd-443a-b05c-bd6a75a19f06\") "
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.899862 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f5r9\" (UniqueName: \"kubernetes.io/projected/52a6b9bb-4d9c-4166-8d08-58d7428a1d0e-kube-api-access-8f5r9\") on node \"crc\" DevicePath \"\""
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.904272 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19816ac9-c1fd-443a-b05c-bd6a75a19f06-kube-api-access-wsk2c" (OuterVolumeSpecName: "kube-api-access-wsk2c") pod "19816ac9-c1fd-443a-b05c-bd6a75a19f06" (UID: "19816ac9-c1fd-443a-b05c-bd6a75a19f06"). InnerVolumeSpecName "kube-api-access-wsk2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.944181 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-config" (OuterVolumeSpecName: "config") pod "19816ac9-c1fd-443a-b05c-bd6a75a19f06" (UID: "19816ac9-c1fd-443a-b05c-bd6a75a19f06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:31:58 crc kubenswrapper[4688]: I0930 17:31:58.955496 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "19816ac9-c1fd-443a-b05c-bd6a75a19f06" (UID: "19816ac9-c1fd-443a-b05c-bd6a75a19f06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.002206 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.002259 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19816ac9-c1fd-443a-b05c-bd6a75a19f06-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.002274 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsk2c\" (UniqueName: \"kubernetes.io/projected/19816ac9-c1fd-443a-b05c-bd6a75a19f06-kube-api-access-wsk2c\") on node \"crc\" DevicePath \"\""
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.196552 4688 generic.go:334] "Generic (PLEG): container finished" podID="19816ac9-c1fd-443a-b05c-bd6a75a19f06" containerID="cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda" exitCode=0
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.196630 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.196653 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" event={"ID":"19816ac9-c1fd-443a-b05c-bd6a75a19f06","Type":"ContainerDied","Data":"cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda"}
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.196684 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rbzgt" event={"ID":"19816ac9-c1fd-443a-b05c-bd6a75a19f06","Type":"ContainerDied","Data":"9ca6c87167a687149e7561c9bcd664e16e92788d041d28c52316b682c0de6490"}
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.196709 4688 scope.go:117] "RemoveContainer" containerID="cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.199609 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8kn9n" event={"ID":"52a6b9bb-4d9c-4166-8d08-58d7428a1d0e","Type":"ContainerDied","Data":"f724f67a355067c25e5f6c0f05dc2265d3a889fe2f07396996ac6d465d2d16e5"}
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.199665 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f724f67a355067c25e5f6c0f05dc2265d3a889fe2f07396996ac6d465d2d16e5"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.199754 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8kn9n"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.201441 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hw9h7" event={"ID":"907d92af-aa64-4815-94d7-f3afd616bdd1","Type":"ContainerDied","Data":"7fc064a755668f875ea36911f1d13e3dc12f6272626ad949ce663dccf88c0d84"}
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.201483 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc064a755668f875ea36911f1d13e3dc12f6272626ad949ce663dccf88c0d84"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.201513 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hw9h7"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.226974 4688 scope.go:117] "RemoveContainer" containerID="9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.245952 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rbzgt"]
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.253416 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rbzgt"]
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.258094 4688 scope.go:117] "RemoveContainer" containerID="cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda"
Sep 30 17:31:59 crc kubenswrapper[4688]: E0930 17:31:59.258764 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda\": container with ID starting with cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda not found: ID does not exist" containerID="cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.258811 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda"} err="failed to get container status \"cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda\": rpc error: code = NotFound desc = could not find container \"cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda\": container with ID starting with cb7dd5d526f4fd3180b04c89cecf5b588a417b617c86353014a6cbed55611eda not found: ID does not exist"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.258846 4688 scope.go:117] "RemoveContainer" containerID="9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd"
Sep 30 17:31:59 crc kubenswrapper[4688]: E0930 17:31:59.259085 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd\": container with ID starting with 9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd not found: ID does not exist" containerID="9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.259116 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd"} err="failed to get container status \"9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd\": rpc error: code = NotFound desc = could not find container \"9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd\": container with ID starting with 9df8f217c01ba735cc8cc220f64f6851261e91b080c5259e0cce43ff6ab7e5dd not found: ID does not exist"
Sep 30 17:31:59 crc kubenswrapper[4688]: I0930 17:31:59.439755 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19816ac9-c1fd-443a-b05c-bd6a75a19f06" path="/var/lib/kubelet/pods/19816ac9-c1fd-443a-b05c-bd6a75a19f06/volumes"
Sep 30 17:32:02 crc kubenswrapper[4688]: I0930 17:32:02.154255 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:32:05 crc kubenswrapper[4688]: E0930 17:32:05.020365 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:32:05 crc kubenswrapper[4688]: E0930 17:32:05.020682 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:32:05 crc kubenswrapper[4688]: E0930 17:32:05.020779 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:32:21.020751273 +0000 UTC m=+998.316188821 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.333452 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bbn9q"] Sep 30 17:32:05 crc kubenswrapper[4688]: E0930 17:32:05.334289 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19816ac9-c1fd-443a-b05c-bd6a75a19f06" containerName="init" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.334313 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="19816ac9-c1fd-443a-b05c-bd6a75a19f06" containerName="init" Sep 30 17:32:05 crc kubenswrapper[4688]: E0930 17:32:05.334327 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a6b9bb-4d9c-4166-8d08-58d7428a1d0e" containerName="mariadb-database-create" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.334337 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a6b9bb-4d9c-4166-8d08-58d7428a1d0e" containerName="mariadb-database-create" Sep 30 17:32:05 crc kubenswrapper[4688]: E0930 17:32:05.334360 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19816ac9-c1fd-443a-b05c-bd6a75a19f06" containerName="dnsmasq-dns" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.334366 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="19816ac9-c1fd-443a-b05c-bd6a75a19f06" containerName="dnsmasq-dns" Sep 30 17:32:05 crc kubenswrapper[4688]: E0930 17:32:05.334380 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907d92af-aa64-4815-94d7-f3afd616bdd1" containerName="mariadb-database-create" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.334385 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="907d92af-aa64-4815-94d7-f3afd616bdd1" containerName="mariadb-database-create" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.334626 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="907d92af-aa64-4815-94d7-f3afd616bdd1" containerName="mariadb-database-create" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.334646 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a6b9bb-4d9c-4166-8d08-58d7428a1d0e" containerName="mariadb-database-create" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.334671 4688 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="19816ac9-c1fd-443a-b05c-bd6a75a19f06" containerName="dnsmasq-dns" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.335441 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bbn9q" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.345371 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bbn9q"] Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.428512 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wzbm\" (UniqueName: \"kubernetes.io/projected/780ba5d7-2d07-4268-8899-bb4ccfedb41f-kube-api-access-8wzbm\") pod \"keystone-db-create-bbn9q\" (UID: \"780ba5d7-2d07-4268-8899-bb4ccfedb41f\") " pod="openstack/keystone-db-create-bbn9q" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.530570 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wzbm\" (UniqueName: \"kubernetes.io/projected/780ba5d7-2d07-4268-8899-bb4ccfedb41f-kube-api-access-8wzbm\") pod \"keystone-db-create-bbn9q\" (UID: \"780ba5d7-2d07-4268-8899-bb4ccfedb41f\") " pod="openstack/keystone-db-create-bbn9q" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.551961 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wzbm\" (UniqueName: \"kubernetes.io/projected/780ba5d7-2d07-4268-8899-bb4ccfedb41f-kube-api-access-8wzbm\") pod \"keystone-db-create-bbn9q\" (UID: \"780ba5d7-2d07-4268-8899-bb4ccfedb41f\") " pod="openstack/keystone-db-create-bbn9q" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.657728 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bbn9q" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.786611 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-05f7-account-create-fj4md"] Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.789018 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-05f7-account-create-fj4md" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.794515 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.800294 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-05f7-account-create-fj4md"] Sep 30 17:32:05 crc kubenswrapper[4688]: I0930 17:32:05.939070 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmbr5\" (UniqueName: \"kubernetes.io/projected/b944aae0-dfff-452a-8618-1ca22a36ae3a-kube-api-access-mmbr5\") pod \"placement-05f7-account-create-fj4md\" (UID: \"b944aae0-dfff-452a-8618-1ca22a36ae3a\") " pod="openstack/placement-05f7-account-create-fj4md" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.041946 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmbr5\" (UniqueName: \"kubernetes.io/projected/b944aae0-dfff-452a-8618-1ca22a36ae3a-kube-api-access-mmbr5\") pod \"placement-05f7-account-create-fj4md\" (UID: \"b944aae0-dfff-452a-8618-1ca22a36ae3a\") " pod="openstack/placement-05f7-account-create-fj4md" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.059966 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6f1e-account-create-47ljs"] Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.061418 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6f1e-account-create-47ljs" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.064211 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.068946 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmbr5\" (UniqueName: \"kubernetes.io/projected/b944aae0-dfff-452a-8618-1ca22a36ae3a-kube-api-access-mmbr5\") pod \"placement-05f7-account-create-fj4md\" (UID: \"b944aae0-dfff-452a-8618-1ca22a36ae3a\") " pod="openstack/placement-05f7-account-create-fj4md" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.072152 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6f1e-account-create-47ljs"] Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.144179 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85hr5\" (UniqueName: \"kubernetes.io/projected/ba6fa854-2454-4d36-a070-1315f0d36b09-kube-api-access-85hr5\") pod \"glance-6f1e-account-create-47ljs\" (UID: \"ba6fa854-2454-4d36-a070-1315f0d36b09\") " pod="openstack/glance-6f1e-account-create-47ljs" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.154464 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-05f7-account-create-fj4md" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.195859 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bbn9q"] Sep 30 17:32:06 crc kubenswrapper[4688]: W0930 17:32:06.203392 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod780ba5d7_2d07_4268_8899_bb4ccfedb41f.slice/crio-2d1b52cf679ed198e4682d09feffcd9654eb17930cee33f60510edeed30f682f WatchSource:0}: Error finding container 2d1b52cf679ed198e4682d09feffcd9654eb17930cee33f60510edeed30f682f: Status 404 returned error can't find the container with id 2d1b52cf679ed198e4682d09feffcd9654eb17930cee33f60510edeed30f682f Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.245922 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85hr5\" (UniqueName: \"kubernetes.io/projected/ba6fa854-2454-4d36-a070-1315f0d36b09-kube-api-access-85hr5\") pod \"glance-6f1e-account-create-47ljs\" (UID: \"ba6fa854-2454-4d36-a070-1315f0d36b09\") " pod="openstack/glance-6f1e-account-create-47ljs" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.271375 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85hr5\" (UniqueName: \"kubernetes.io/projected/ba6fa854-2454-4d36-a070-1315f0d36b09-kube-api-access-85hr5\") pod \"glance-6f1e-account-create-47ljs\" (UID: \"ba6fa854-2454-4d36-a070-1315f0d36b09\") " pod="openstack/glance-6f1e-account-create-47ljs" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.292474 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bbn9q" event={"ID":"780ba5d7-2d07-4268-8899-bb4ccfedb41f","Type":"ContainerStarted","Data":"2d1b52cf679ed198e4682d09feffcd9654eb17930cee33f60510edeed30f682f"} Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.411470 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6f1e-account-create-47ljs" Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.618186 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-05f7-account-create-fj4md"] Sep 30 17:32:06 crc kubenswrapper[4688]: W0930 17:32:06.624380 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb944aae0_dfff_452a_8618_1ca22a36ae3a.slice/crio-181758dc815ae7bd8fecbb4685e01afd3d4c5818c367d025e199c8bd1193e602 WatchSource:0}: Error finding container 181758dc815ae7bd8fecbb4685e01afd3d4c5818c367d025e199c8bd1193e602: Status 404 returned error can't find the container with id 181758dc815ae7bd8fecbb4685e01afd3d4c5818c367d025e199c8bd1193e602 Sep 30 17:32:06 crc kubenswrapper[4688]: I0930 17:32:06.855332 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6f1e-account-create-47ljs"] Sep 30 17:32:06 crc kubenswrapper[4688]: W0930 17:32:06.860304 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba6fa854_2454_4d36_a070_1315f0d36b09.slice/crio-1d0eced287d725d24e2df05c6f148ad6b07e9ac770ca9610c3d21790e3066132 WatchSource:0}: Error finding container 1d0eced287d725d24e2df05c6f148ad6b07e9ac770ca9610c3d21790e3066132: Status 404 returned error can't find the container with id 1d0eced287d725d24e2df05c6f148ad6b07e9ac770ca9610c3d21790e3066132 Sep 30 17:32:07 crc kubenswrapper[4688]: I0930 17:32:07.301933 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-05f7-account-create-fj4md" event={"ID":"b944aae0-dfff-452a-8618-1ca22a36ae3a","Type":"ContainerStarted","Data":"181758dc815ae7bd8fecbb4685e01afd3d4c5818c367d025e199c8bd1193e602"} Sep 30 17:32:07 crc kubenswrapper[4688]: I0930 17:32:07.302993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6f1e-account-create-47ljs" event={"ID":"ba6fa854-2454-4d36-a070-1315f0d36b09","Type":"ContainerStarted","Data":"1d0eced287d725d24e2df05c6f148ad6b07e9ac770ca9610c3d21790e3066132"} Sep 30 17:32:09 crc kubenswrapper[4688]: I0930 17:32:09.320780 4688 generic.go:334] "Generic (PLEG): container finished" podID="b944aae0-dfff-452a-8618-1ca22a36ae3a" containerID="ad10fccadb31071187f7d46048bbba7baed415c5797e476575aac63e502b7b8f" exitCode=0 Sep 30 17:32:09 crc kubenswrapper[4688]: I0930 17:32:09.320892 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-05f7-account-create-fj4md" event={"ID":"b944aae0-dfff-452a-8618-1ca22a36ae3a","Type":"ContainerDied","Data":"ad10fccadb31071187f7d46048bbba7baed415c5797e476575aac63e502b7b8f"} Sep 30 17:32:09 crc kubenswrapper[4688]: I0930 17:32:09.323253 4688 generic.go:334] "Generic (PLEG): container finished" podID="ba6fa854-2454-4d36-a070-1315f0d36b09" containerID="26a976966efd9ee04f54218a5e581a3e73e685535e90debb6decd459dac6fe36" exitCode=0 Sep 30 17:32:09 crc kubenswrapper[4688]: I0930 17:32:09.323321 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6f1e-account-create-47ljs" event={"ID":"ba6fa854-2454-4d36-a070-1315f0d36b09","Type":"ContainerDied","Data":"26a976966efd9ee04f54218a5e581a3e73e685535e90debb6decd459dac6fe36"} Sep 30 17:32:09 crc kubenswrapper[4688]: I0930 17:32:09.325094 4688 generic.go:334] "Generic (PLEG): container finished" podID="780ba5d7-2d07-4268-8899-bb4ccfedb41f" containerID="8c06aa6b1bad2e7a3012189a0f0094508dfea57854fa169130f139db6f658480" exitCode=0 Sep 30 17:32:09 
crc kubenswrapper[4688]: I0930 17:32:09.325151 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bbn9q" event={"ID":"780ba5d7-2d07-4268-8899-bb4ccfedb41f","Type":"ContainerDied","Data":"8c06aa6b1bad2e7a3012189a0f0094508dfea57854fa169130f139db6f658480"} Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.339721 4688 generic.go:334] "Generic (PLEG): container finished" podID="5cfbeb46-f5ad-46f6-93e0-72c296703b5c" containerID="c082befd234dc0901188426171c3b48a4b3c235fbbcf5d94c96dffbfe15ac9d6" exitCode=0 Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.339830 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cfbeb46-f5ad-46f6-93e0-72c296703b5c","Type":"ContainerDied","Data":"c082befd234dc0901188426171c3b48a4b3c235fbbcf5d94c96dffbfe15ac9d6"} Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.702978 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6f1e-account-create-47ljs" Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.747611 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bbn9q" Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.758025 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-05f7-account-create-fj4md" Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.856712 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmbr5\" (UniqueName: \"kubernetes.io/projected/b944aae0-dfff-452a-8618-1ca22a36ae3a-kube-api-access-mmbr5\") pod \"b944aae0-dfff-452a-8618-1ca22a36ae3a\" (UID: \"b944aae0-dfff-452a-8618-1ca22a36ae3a\") " Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.856789 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wzbm\" (UniqueName: \"kubernetes.io/projected/780ba5d7-2d07-4268-8899-bb4ccfedb41f-kube-api-access-8wzbm\") pod \"780ba5d7-2d07-4268-8899-bb4ccfedb41f\" (UID: \"780ba5d7-2d07-4268-8899-bb4ccfedb41f\") " Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.857123 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85hr5\" (UniqueName: \"kubernetes.io/projected/ba6fa854-2454-4d36-a070-1315f0d36b09-kube-api-access-85hr5\") pod \"ba6fa854-2454-4d36-a070-1315f0d36b09\" (UID: \"ba6fa854-2454-4d36-a070-1315f0d36b09\") " Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.862879 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6fa854-2454-4d36-a070-1315f0d36b09-kube-api-access-85hr5" (OuterVolumeSpecName: "kube-api-access-85hr5") pod "ba6fa854-2454-4d36-a070-1315f0d36b09" (UID: "ba6fa854-2454-4d36-a070-1315f0d36b09"). InnerVolumeSpecName "kube-api-access-85hr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.862965 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b944aae0-dfff-452a-8618-1ca22a36ae3a-kube-api-access-mmbr5" (OuterVolumeSpecName: "kube-api-access-mmbr5") pod "b944aae0-dfff-452a-8618-1ca22a36ae3a" (UID: "b944aae0-dfff-452a-8618-1ca22a36ae3a"). InnerVolumeSpecName "kube-api-access-mmbr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.863091 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780ba5d7-2d07-4268-8899-bb4ccfedb41f-kube-api-access-8wzbm" (OuterVolumeSpecName: "kube-api-access-8wzbm") pod "780ba5d7-2d07-4268-8899-bb4ccfedb41f" (UID: "780ba5d7-2d07-4268-8899-bb4ccfedb41f"). InnerVolumeSpecName "kube-api-access-8wzbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.959298 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85hr5\" (UniqueName: \"kubernetes.io/projected/ba6fa854-2454-4d36-a070-1315f0d36b09-kube-api-access-85hr5\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.959353 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmbr5\" (UniqueName: \"kubernetes.io/projected/b944aae0-dfff-452a-8618-1ca22a36ae3a-kube-api-access-mmbr5\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:10 crc kubenswrapper[4688]: I0930 17:32:10.959372 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wzbm\" (UniqueName: \"kubernetes.io/projected/780ba5d7-2d07-4268-8899-bb4ccfedb41f-kube-api-access-8wzbm\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.107843 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-v6d9x" podUID="a5a55636-8e3a-49bf-8c31-2980f614fd65" containerName="ovn-controller" probeResult="failure" output=< Sep 30 17:32:11 crc kubenswrapper[4688]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Sep 30 17:32:11 crc kubenswrapper[4688]: > Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.137804 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.151111 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dsjj5" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.351897 4688 generic.go:334] "Generic (PLEG): container finished" podID="b5a0754e-917b-4fcf-bc77-3b510479a986" containerID="3c6b763ad6b225b3b081ca605beead7987f546243dd6c3b45b77255cc612473d" exitCode=0 Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.351970 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5a0754e-917b-4fcf-bc77-3b510479a986","Type":"ContainerDied","Data":"3c6b763ad6b225b3b081ca605beead7987f546243dd6c3b45b77255cc612473d"} Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.356182 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cfbeb46-f5ad-46f6-93e0-72c296703b5c","Type":"ContainerStarted","Data":"45ec4b1e08c87e09ea5749f48aabeb5a9950badd438ff358ef821f6b233bec3a"} Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.356609 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.358121 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-05f7-account-create-fj4md" event={"ID":"b944aae0-dfff-452a-8618-1ca22a36ae3a","Type":"ContainerDied","Data":"181758dc815ae7bd8fecbb4685e01afd3d4c5818c367d025e199c8bd1193e602"} Sep 30 17:32:11 crc kubenswrapper[4688]: 
I0930 17:32:11.358181 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="181758dc815ae7bd8fecbb4685e01afd3d4c5818c367d025e199c8bd1193e602" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.358159 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-05f7-account-create-fj4md" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.361030 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6f1e-account-create-47ljs" event={"ID":"ba6fa854-2454-4d36-a070-1315f0d36b09","Type":"ContainerDied","Data":"1d0eced287d725d24e2df05c6f148ad6b07e9ac770ca9610c3d21790e3066132"} Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.361071 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6f1e-account-create-47ljs" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.361098 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d0eced287d725d24e2df05c6f148ad6b07e9ac770ca9610c3d21790e3066132" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.365711 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bbn9q" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.365808 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bbn9q" event={"ID":"780ba5d7-2d07-4268-8899-bb4ccfedb41f","Type":"ContainerDied","Data":"2d1b52cf679ed198e4682d09feffcd9654eb17930cee33f60510edeed30f682f"} Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.365846 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1b52cf679ed198e4682d09feffcd9654eb17930cee33f60510edeed30f682f" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.382670 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-v6d9x-config-6gjc7"] Sep 30 17:32:11 crc kubenswrapper[4688]: E0930 17:32:11.383289 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ba5d7-2d07-4268-8899-bb4ccfedb41f" containerName="mariadb-database-create" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.383323 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ba5d7-2d07-4268-8899-bb4ccfedb41f" containerName="mariadb-database-create" Sep 30 17:32:11 crc kubenswrapper[4688]: E0930 17:32:11.383352 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b944aae0-dfff-452a-8618-1ca22a36ae3a" containerName="mariadb-account-create" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.383365 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b944aae0-dfff-452a-8618-1ca22a36ae3a" containerName="mariadb-account-create" Sep 30 17:32:11 crc kubenswrapper[4688]: E0930 17:32:11.383389 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6fa854-2454-4d36-a070-1315f0d36b09" containerName="mariadb-account-create" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.383402 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6fa854-2454-4d36-a070-1315f0d36b09" containerName="mariadb-account-create" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.383726 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b944aae0-dfff-452a-8618-1ca22a36ae3a" containerName="mariadb-account-create" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.383782 4688 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="780ba5d7-2d07-4268-8899-bb4ccfedb41f" containerName="mariadb-database-create" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.383798 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba6fa854-2454-4d36-a070-1315f0d36b09" containerName="mariadb-account-create" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.401656 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.404901 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.406053 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-v6d9x-config-6gjc7"] Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.473351 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-scripts\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.474204 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.474368 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-log-ovn\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.474492 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-additional-scripts\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.474526 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run-ovn\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.474559 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljrkn\" (UniqueName: \"kubernetes.io/projected/5469a8ef-625b-440a-a503-f1b738fa117c-kube-api-access-ljrkn\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.576884 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.576974 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-log-ovn\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.577040 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-additional-scripts\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.577063 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run-ovn\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.577090 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljrkn\" (UniqueName: \"kubernetes.io/projected/5469a8ef-625b-440a-a503-f1b738fa117c-kube-api-access-ljrkn\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.577127 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-scripts\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.577295 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.578329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-additional-scripts\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.578400 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-log-ovn\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.578441 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run-ovn\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.580120 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-scripts\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.597031 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljrkn\" (UniqueName: \"kubernetes.io/projected/5469a8ef-625b-440a-a503-f1b738fa117c-kube-api-access-ljrkn\") pod \"ovn-controller-v6d9x-config-6gjc7\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.741413 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.560078356 podStartE2EDuration="1m1.741391922s" podCreationTimestamp="2025-09-30 17:31:10 +0000 UTC" firstStartedPulling="2025-09-30 17:31:26.65947166 +0000 UTC m=+943.954909218" lastFinishedPulling="2025-09-30 17:31:37.840785226 +0000 UTC m=+955.136222784" observedRunningTime="2025-09-30 17:32:11.471903089 +0000 UTC m=+988.767340657" watchObservedRunningTime="2025-09-30 17:32:11.741391922 +0000 UTC m=+989.036829480" Sep 30 17:32:11 crc kubenswrapper[4688]: I0930 17:32:11.893381 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:12 crc kubenswrapper[4688]: I0930 17:32:12.364209 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-v6d9x-config-6gjc7"] Sep 30 17:32:12 crc kubenswrapper[4688]: I0930 17:32:12.377371 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-v6d9x-config-6gjc7" event={"ID":"5469a8ef-625b-440a-a503-f1b738fa117c","Type":"ContainerStarted","Data":"f8bf684dce6f774346a32be336eac17a46c2f05d2aa2c017a1656ee67f284629"} Sep 30 17:32:12 crc kubenswrapper[4688]: I0930 17:32:12.380315 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5a0754e-917b-4fcf-bc77-3b510479a986","Type":"ContainerStarted","Data":"ee2346cdd1ae34be3c853bf0829ee468cea2299a295759b6eea684fa3fe690da"} Sep 30 17:32:12 crc kubenswrapper[4688]: I0930 17:32:12.380704 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:32:12 crc kubenswrapper[4688]: I0930 17:32:12.423595 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.86143552 podStartE2EDuration="1m1.423542242s" podCreationTimestamp="2025-09-30 17:31:11 +0000 UTC" firstStartedPulling="2025-09-30 17:31:26.673525473 +0000 UTC m=+943.968963031" lastFinishedPulling="2025-09-30 17:31:38.235632195 +0000 UTC m=+955.531069753" observedRunningTime="2025-09-30 17:32:12.412622697 +0000 UTC m=+989.708060255" watchObservedRunningTime="2025-09-30 17:32:12.423542242 +0000 UTC m=+989.718979810" Sep 30 17:32:13 crc kubenswrapper[4688]: I0930 17:32:13.392898 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="5469a8ef-625b-440a-a503-f1b738fa117c" containerID="a555da79ecda92e047f32300c49a2e5e6cafa641eb8727fbd2e02278c351e4c7" exitCode=0 Sep 30 17:32:13 crc kubenswrapper[4688]: I0930 17:32:13.392968 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-v6d9x-config-6gjc7" event={"ID":"5469a8ef-625b-440a-a503-f1b738fa117c","Type":"ContainerDied","Data":"a555da79ecda92e047f32300c49a2e5e6cafa641eb8727fbd2e02278c351e4c7"} Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.703401 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.741163 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljrkn\" (UniqueName: \"kubernetes.io/projected/5469a8ef-625b-440a-a503-f1b738fa117c-kube-api-access-ljrkn\") pod \"5469a8ef-625b-440a-a503-f1b738fa117c\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.741237 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-additional-scripts\") pod \"5469a8ef-625b-440a-a503-f1b738fa117c\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.741274 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-log-ovn\") pod \"5469a8ef-625b-440a-a503-f1b738fa117c\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.741296 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-scripts\") pod \"5469a8ef-625b-440a-a503-f1b738fa117c\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.741387 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run-ovn\") pod \"5469a8ef-625b-440a-a503-f1b738fa117c\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.741417 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run\") pod \"5469a8ef-625b-440a-a503-f1b738fa117c\" (UID: \"5469a8ef-625b-440a-a503-f1b738fa117c\") " Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.741721 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run" (OuterVolumeSpecName: "var-run") pod "5469a8ef-625b-440a-a503-f1b738fa117c" (UID: "5469a8ef-625b-440a-a503-f1b738fa117c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.741797 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5469a8ef-625b-440a-a503-f1b738fa117c" (UID: "5469a8ef-625b-440a-a503-f1b738fa117c"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.741872 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5469a8ef-625b-440a-a503-f1b738fa117c" (UID: "5469a8ef-625b-440a-a503-f1b738fa117c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.742716 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5469a8ef-625b-440a-a503-f1b738fa117c" (UID: "5469a8ef-625b-440a-a503-f1b738fa117c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.742900 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-scripts" (OuterVolumeSpecName: "scripts") pod "5469a8ef-625b-440a-a503-f1b738fa117c" (UID: "5469a8ef-625b-440a-a503-f1b738fa117c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.748775 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5469a8ef-625b-440a-a503-f1b738fa117c-kube-api-access-ljrkn" (OuterVolumeSpecName: "kube-api-access-ljrkn") pod "5469a8ef-625b-440a-a503-f1b738fa117c" (UID: "5469a8ef-625b-440a-a503-f1b738fa117c"). InnerVolumeSpecName "kube-api-access-ljrkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.843978 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljrkn\" (UniqueName: \"kubernetes.io/projected/5469a8ef-625b-440a-a503-f1b738fa117c-kube-api-access-ljrkn\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.844035 4688 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-additional-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.844047 4688 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.844059 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5469a8ef-625b-440a-a503-f1b738fa117c-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.844071 4688 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:14 crc kubenswrapper[4688]: I0930 17:32:14.844082 4688 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5469a8ef-625b-440a-a503-f1b738fa117c-var-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:15 crc kubenswrapper[4688]: I0930 17:32:15.411294 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-v6d9x-config-6gjc7" event={"ID":"5469a8ef-625b-440a-a503-f1b738fa117c","Type":"ContainerDied","Data":"f8bf684dce6f774346a32be336eac17a46c2f05d2aa2c017a1656ee67f284629"} Sep 30 17:32:15 crc kubenswrapper[4688]: I0930 17:32:15.411791 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8bf684dce6f774346a32be336eac17a46c2f05d2aa2c017a1656ee67f284629" Sep 30 17:32:15 crc kubenswrapper[4688]: I0930 17:32:15.411389 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-v6d9x-config-6gjc7" Sep 30 17:32:15 crc kubenswrapper[4688]: I0930 17:32:15.843352 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-v6d9x-config-6gjc7"] Sep 30 17:32:15 crc kubenswrapper[4688]: I0930 17:32:15.851159 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-v6d9x-config-6gjc7"] Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.099917 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-v6d9x" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.203531 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7rrsk"] Sep 30 17:32:16 crc kubenswrapper[4688]: E0930 17:32:16.204139 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5469a8ef-625b-440a-a503-f1b738fa117c" containerName="ovn-config" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.204168 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5469a8ef-625b-440a-a503-f1b738fa117c" containerName="ovn-config" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.204459 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5469a8ef-625b-440a-a503-f1b738fa117c" containerName="ovn-config" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.205434 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.211812 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7rrsk"] Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.214199 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.220710 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pcgjs" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.272497 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-config-data\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.272590 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-combined-ca-bundle\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.272649 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-db-sync-config-data\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.272670 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84cc\" (UniqueName: \"kubernetes.io/projected/41c69c99-2a0c-4f87-85c6-557b4e133567-kube-api-access-v84cc\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.374850 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-combined-ca-bundle\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.374953 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-db-sync-config-data\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.374975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v84cc\" (UniqueName: \"kubernetes.io/projected/41c69c99-2a0c-4f87-85c6-557b4e133567-kube-api-access-v84cc\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.375089 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-config-data\") pod 
\"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.381041 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-db-sync-config-data\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.381473 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-combined-ca-bundle\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.381277 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-config-data\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.396019 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84cc\" (UniqueName: \"kubernetes.io/projected/41c69c99-2a0c-4f87-85c6-557b4e133567-kube-api-access-v84cc\") pod \"glance-db-sync-7rrsk\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:16 crc kubenswrapper[4688]: I0930 17:32:16.525251 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:17 crc kubenswrapper[4688]: I0930 17:32:17.054744 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7rrsk"] Sep 30 17:32:17 crc kubenswrapper[4688]: I0930 17:32:17.442393 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5469a8ef-625b-440a-a503-f1b738fa117c" path="/var/lib/kubelet/pods/5469a8ef-625b-440a-a503-f1b738fa117c/volumes" Sep 30 17:32:17 crc kubenswrapper[4688]: I0930 17:32:17.443368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7rrsk" event={"ID":"41c69c99-2a0c-4f87-85c6-557b4e133567","Type":"ContainerStarted","Data":"9a6d86f1c820db89410a558a216c0e12337a743a37e0768e372f953217871be2"} Sep 30 17:32:21 crc kubenswrapper[4688]: I0930 17:32:21.067915 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:32:21 crc kubenswrapper[4688]: E0930 17:32:21.068237 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:32:21 crc kubenswrapper[4688]: E0930 17:32:21.068617 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:32:21 crc kubenswrapper[4688]: E0930 17:32:21.068699 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. 
No retries permitted until 2025-09-30 17:32:53.068675991 +0000 UTC m=+1030.364113549 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.204862 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.516188 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-nwcft"] Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.517547 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nwcft" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.532879 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nwcft"] Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.592923 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.658782 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7qnp8"] Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.660896 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7qnp8" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.676046 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7qnp8"] Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.707132 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ldd\" (UniqueName: \"kubernetes.io/projected/50ffd3d0-11ce-4a79-acb8-540431aa7db1-kube-api-access-v9ldd\") pod \"cinder-db-create-nwcft\" (UID: \"50ffd3d0-11ce-4a79-acb8-540431aa7db1\") " pod="openstack/cinder-db-create-nwcft" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.809609 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l859h\" (UniqueName: \"kubernetes.io/projected/59a1e2c1-3d27-4e4a-810d-7c0385457374-kube-api-access-l859h\") pod \"barbican-db-create-7qnp8\" (UID: \"59a1e2c1-3d27-4e4a-810d-7c0385457374\") " pod="openstack/barbican-db-create-7qnp8" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.809721 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9ldd\" (UniqueName: \"kubernetes.io/projected/50ffd3d0-11ce-4a79-acb8-540431aa7db1-kube-api-access-v9ldd\") pod \"cinder-db-create-nwcft\" (UID: \"50ffd3d0-11ce-4a79-acb8-540431aa7db1\") " pod="openstack/cinder-db-create-nwcft" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.850968 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9x9pr"] Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.852757 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9ldd\" (UniqueName: \"kubernetes.io/projected/50ffd3d0-11ce-4a79-acb8-540431aa7db1-kube-api-access-v9ldd\") pod \"cinder-db-create-nwcft\" (UID: \"50ffd3d0-11ce-4a79-acb8-540431aa7db1\") " pod="openstack/cinder-db-create-nwcft" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.852997 4688 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/neutron-db-create-9x9pr" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.911693 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l859h\" (UniqueName: \"kubernetes.io/projected/59a1e2c1-3d27-4e4a-810d-7c0385457374-kube-api-access-l859h\") pod \"barbican-db-create-7qnp8\" (UID: \"59a1e2c1-3d27-4e4a-810d-7c0385457374\") " pod="openstack/barbican-db-create-7qnp8" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.911835 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bddhd\" (UniqueName: \"kubernetes.io/projected/1c56fdac-3acd-4d5f-b702-e25591b6112e-kube-api-access-bddhd\") pod \"neutron-db-create-9x9pr\" (UID: \"1c56fdac-3acd-4d5f-b702-e25591b6112e\") " pod="openstack/neutron-db-create-9x9pr" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.925258 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9x9pr"] Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.935660 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nwcft" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.950696 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l859h\" (UniqueName: \"kubernetes.io/projected/59a1e2c1-3d27-4e4a-810d-7c0385457374-kube-api-access-l859h\") pod \"barbican-db-create-7qnp8\" (UID: \"59a1e2c1-3d27-4e4a-810d-7c0385457374\") " pod="openstack/barbican-db-create-7qnp8" Sep 30 17:32:22 crc kubenswrapper[4688]: I0930 17:32:22.996045 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7qnp8" Sep 30 17:32:23 crc kubenswrapper[4688]: I0930 17:32:23.013535 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bddhd\" (UniqueName: \"kubernetes.io/projected/1c56fdac-3acd-4d5f-b702-e25591b6112e-kube-api-access-bddhd\") pod \"neutron-db-create-9x9pr\" (UID: \"1c56fdac-3acd-4d5f-b702-e25591b6112e\") " pod="openstack/neutron-db-create-9x9pr" Sep 30 17:32:23 crc kubenswrapper[4688]: I0930 17:32:23.038475 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bddhd\" (UniqueName: \"kubernetes.io/projected/1c56fdac-3acd-4d5f-b702-e25591b6112e-kube-api-access-bddhd\") pod \"neutron-db-create-9x9pr\" (UID: \"1c56fdac-3acd-4d5f-b702-e25591b6112e\") " pod="openstack/neutron-db-create-9x9pr" Sep 30 17:32:23 crc kubenswrapper[4688]: I0930 17:32:23.201250 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9x9pr" Sep 30 17:32:25 crc kubenswrapper[4688]: I0930 17:32:25.389155 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e9f6-account-create-d4bqm"] Sep 30 17:32:25 crc kubenswrapper[4688]: I0930 17:32:25.393073 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e9f6-account-create-d4bqm" Sep 30 17:32:25 crc kubenswrapper[4688]: I0930 17:32:25.395717 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Sep 30 17:32:25 crc kubenswrapper[4688]: I0930 17:32:25.403558 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e9f6-account-create-d4bqm"] Sep 30 17:32:25 crc kubenswrapper[4688]: I0930 17:32:25.462784 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sdw5\" (UniqueName: \"kubernetes.io/projected/18659cd0-889f-42a1-8c6a-297f6fd0cd68-kube-api-access-7sdw5\") pod \"keystone-e9f6-account-create-d4bqm\" (UID: \"18659cd0-889f-42a1-8c6a-297f6fd0cd68\") " pod="openstack/keystone-e9f6-account-create-d4bqm" Sep 30 17:32:25 crc kubenswrapper[4688]: I0930 17:32:25.570171 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sdw5\" (UniqueName: \"kubernetes.io/projected/18659cd0-889f-42a1-8c6a-297f6fd0cd68-kube-api-access-7sdw5\") pod \"keystone-e9f6-account-create-d4bqm\" (UID: \"18659cd0-889f-42a1-8c6a-297f6fd0cd68\") " pod="openstack/keystone-e9f6-account-create-d4bqm" Sep 30 17:32:25 crc kubenswrapper[4688]: I0930 17:32:25.595888 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sdw5\" (UniqueName: \"kubernetes.io/projected/18659cd0-889f-42a1-8c6a-297f6fd0cd68-kube-api-access-7sdw5\") pod \"keystone-e9f6-account-create-d4bqm\" (UID: \"18659cd0-889f-42a1-8c6a-297f6fd0cd68\") " pod="openstack/keystone-e9f6-account-create-d4bqm" Sep 30 17:32:25 crc kubenswrapper[4688]: I0930 17:32:25.729923 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e9f6-account-create-d4bqm" Sep 30 17:32:30 crc kubenswrapper[4688]: I0930 17:32:30.347535 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nwcft"] Sep 30 17:32:30 crc kubenswrapper[4688]: I0930 17:32:30.357211 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e9f6-account-create-d4bqm"] Sep 30 17:32:30 crc kubenswrapper[4688]: W0930 17:32:30.360389 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50ffd3d0_11ce_4a79_acb8_540431aa7db1.slice/crio-38c86836eda8b3153ea1a31f4cbe1e608b0254bfbde0e7d471edb85cd378f286 WatchSource:0}: Error finding container 38c86836eda8b3153ea1a31f4cbe1e608b0254bfbde0e7d471edb85cd378f286: Status 404 returned error can't find the container with id 38c86836eda8b3153ea1a31f4cbe1e608b0254bfbde0e7d471edb85cd378f286 Sep 30 17:32:30 crc kubenswrapper[4688]: I0930 17:32:30.431039 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7qnp8"] Sep 30 17:32:30 crc kubenswrapper[4688]: W0930 17:32:30.448880 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59a1e2c1_3d27_4e4a_810d_7c0385457374.slice/crio-511c89bda8530d17404b1eb9751cec8bc27272e49db36cefc67af2175d39b905 WatchSource:0}: Error finding container 511c89bda8530d17404b1eb9751cec8bc27272e49db36cefc67af2175d39b905: Status 404 returned error can't find the container with id 511c89bda8530d17404b1eb9751cec8bc27272e49db36cefc67af2175d39b905 Sep 30 17:32:30 crc kubenswrapper[4688]: I0930 17:32:30.520968 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9x9pr"] Sep 30 17:32:30 crc kubenswrapper[4688]: W0930 17:32:30.532824 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c56fdac_3acd_4d5f_b702_e25591b6112e.slice/crio-303c2b6a9a0a3d3fae16211fe888fbe552691ddfc9852f87c4f6314a14afea43 WatchSource:0}: Error finding container 303c2b6a9a0a3d3fae16211fe888fbe552691ddfc9852f87c4f6314a14afea43: Status 404 returned error can't find the container with id 303c2b6a9a0a3d3fae16211fe888fbe552691ddfc9852f87c4f6314a14afea43 Sep 30 17:32:30 crc kubenswrapper[4688]: I0930 17:32:30.566764 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9x9pr" event={"ID":"1c56fdac-3acd-4d5f-b702-e25591b6112e","Type":"ContainerStarted","Data":"303c2b6a9a0a3d3fae16211fe888fbe552691ddfc9852f87c4f6314a14afea43"} Sep 30 17:32:30 crc kubenswrapper[4688]: I0930 17:32:30.568943 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e9f6-account-create-d4bqm" event={"ID":"18659cd0-889f-42a1-8c6a-297f6fd0cd68","Type":"ContainerStarted","Data":"247dfe26ac57d5705712b4bab6073fc60fbf790fb17028b1b3b4b00a2f40f47d"} Sep 30 17:32:30 crc kubenswrapper[4688]: I0930 17:32:30.571924 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7qnp8" event={"ID":"59a1e2c1-3d27-4e4a-810d-7c0385457374","Type":"ContainerStarted","Data":"511c89bda8530d17404b1eb9751cec8bc27272e49db36cefc67af2175d39b905"} Sep 30 17:32:30 crc kubenswrapper[4688]: I0930 17:32:30.577183 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nwcft" 
event={"ID":"50ffd3d0-11ce-4a79-acb8-540431aa7db1","Type":"ContainerStarted","Data":"38c86836eda8b3153ea1a31f4cbe1e608b0254bfbde0e7d471edb85cd378f286"} Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.590199 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9x9pr" event={"ID":"1c56fdac-3acd-4d5f-b702-e25591b6112e","Type":"ContainerDied","Data":"a04242423727dfae0acb0628b67a8c724e41e23a4bdc74489b4ff074306e7bb7"} Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.590071 4688 generic.go:334] "Generic (PLEG): container finished" podID="1c56fdac-3acd-4d5f-b702-e25591b6112e" containerID="a04242423727dfae0acb0628b67a8c724e41e23a4bdc74489b4ff074306e7bb7" exitCode=0 Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.596156 4688 generic.go:334] "Generic (PLEG): container finished" podID="18659cd0-889f-42a1-8c6a-297f6fd0cd68" containerID="aebc8f2a37807f3ffd5241f969ab7aa8a5e0130bc59c1133b7928a670a6c94b9" exitCode=0 Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.596246 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e9f6-account-create-d4bqm" event={"ID":"18659cd0-889f-42a1-8c6a-297f6fd0cd68","Type":"ContainerDied","Data":"aebc8f2a37807f3ffd5241f969ab7aa8a5e0130bc59c1133b7928a670a6c94b9"} Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.601156 4688 generic.go:334] "Generic (PLEG): container finished" podID="59a1e2c1-3d27-4e4a-810d-7c0385457374" containerID="6d772c022d00a39b6e2a0eadd7e9a805c2b360c22af455dea013c90ed2613980" exitCode=0 Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.601262 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7qnp8" event={"ID":"59a1e2c1-3d27-4e4a-810d-7c0385457374","Type":"ContainerDied","Data":"6d772c022d00a39b6e2a0eadd7e9a805c2b360c22af455dea013c90ed2613980"} Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.603030 4688 generic.go:334] "Generic (PLEG): container finished" podID="50ffd3d0-11ce-4a79-acb8-540431aa7db1" containerID="840beba06d7b111f3d3ba2cbb4c89f8a333856abd6a564e78c4bad157e3bd4bf" exitCode=0 Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.603104 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nwcft" event={"ID":"50ffd3d0-11ce-4a79-acb8-540431aa7db1","Type":"ContainerDied","Data":"840beba06d7b111f3d3ba2cbb4c89f8a333856abd6a564e78c4bad157e3bd4bf"} Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.612717 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7rrsk" event={"ID":"41c69c99-2a0c-4f87-85c6-557b4e133567","Type":"ContainerStarted","Data":"272e5aba80cc2cc4cc59a5fef26ded3fbd7cc0f3ba9f1c386b1c89b50559908b"} Sep 30 17:32:31 crc kubenswrapper[4688]: I0930 17:32:31.637658 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7rrsk" podStartSLOduration=2.778282112 podStartE2EDuration="15.637633013s" podCreationTimestamp="2025-09-30 17:32:16 +0000 UTC" firstStartedPulling="2025-09-30 17:32:17.063520061 +0000 UTC m=+994.358957619" lastFinishedPulling="2025-09-30 17:32:29.922870962 +0000 UTC m=+1007.218308520" observedRunningTime="2025-09-30 17:32:31.63355508 +0000 UTC m=+1008.928992658" watchObservedRunningTime="2025-09-30 17:32:31.637633013 +0000 UTC m=+1008.933070571" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.018029 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9x9pr" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.118721 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e9f6-account-create-d4bqm" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.137878 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bddhd\" (UniqueName: \"kubernetes.io/projected/1c56fdac-3acd-4d5f-b702-e25591b6112e-kube-api-access-bddhd\") pod \"1c56fdac-3acd-4d5f-b702-e25591b6112e\" (UID: \"1c56fdac-3acd-4d5f-b702-e25591b6112e\") " Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.138003 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sdw5\" (UniqueName: \"kubernetes.io/projected/18659cd0-889f-42a1-8c6a-297f6fd0cd68-kube-api-access-7sdw5\") pod \"18659cd0-889f-42a1-8c6a-297f6fd0cd68\" (UID: \"18659cd0-889f-42a1-8c6a-297f6fd0cd68\") " Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.148888 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7qnp8" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.153828 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18659cd0-889f-42a1-8c6a-297f6fd0cd68-kube-api-access-7sdw5" (OuterVolumeSpecName: "kube-api-access-7sdw5") pod "18659cd0-889f-42a1-8c6a-297f6fd0cd68" (UID: "18659cd0-889f-42a1-8c6a-297f6fd0cd68"). InnerVolumeSpecName "kube-api-access-7sdw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.154384 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nwcft" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.155133 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c56fdac-3acd-4d5f-b702-e25591b6112e-kube-api-access-bddhd" (OuterVolumeSpecName: "kube-api-access-bddhd") pod "1c56fdac-3acd-4d5f-b702-e25591b6112e" (UID: "1c56fdac-3acd-4d5f-b702-e25591b6112e"). InnerVolumeSpecName "kube-api-access-bddhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.239762 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9ldd\" (UniqueName: \"kubernetes.io/projected/50ffd3d0-11ce-4a79-acb8-540431aa7db1-kube-api-access-v9ldd\") pod \"50ffd3d0-11ce-4a79-acb8-540431aa7db1\" (UID: \"50ffd3d0-11ce-4a79-acb8-540431aa7db1\") " Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.239849 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l859h\" (UniqueName: \"kubernetes.io/projected/59a1e2c1-3d27-4e4a-810d-7c0385457374-kube-api-access-l859h\") pod \"59a1e2c1-3d27-4e4a-810d-7c0385457374\" (UID: \"59a1e2c1-3d27-4e4a-810d-7c0385457374\") " Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.240326 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bddhd\" (UniqueName: \"kubernetes.io/projected/1c56fdac-3acd-4d5f-b702-e25591b6112e-kube-api-access-bddhd\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.240346 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sdw5\" (UniqueName: \"kubernetes.io/projected/18659cd0-889f-42a1-8c6a-297f6fd0cd68-kube-api-access-7sdw5\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.244260 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ffd3d0-11ce-4a79-acb8-540431aa7db1-kube-api-access-v9ldd" (OuterVolumeSpecName: "kube-api-access-v9ldd") pod "50ffd3d0-11ce-4a79-acb8-540431aa7db1" (UID: "50ffd3d0-11ce-4a79-acb8-540431aa7db1"). InnerVolumeSpecName "kube-api-access-v9ldd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.244830 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a1e2c1-3d27-4e4a-810d-7c0385457374-kube-api-access-l859h" (OuterVolumeSpecName: "kube-api-access-l859h") pod "59a1e2c1-3d27-4e4a-810d-7c0385457374" (UID: "59a1e2c1-3d27-4e4a-810d-7c0385457374"). InnerVolumeSpecName "kube-api-access-l859h". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.342182 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9ldd\" (UniqueName: \"kubernetes.io/projected/50ffd3d0-11ce-4a79-acb8-540431aa7db1-kube-api-access-v9ldd\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.342653 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l859h\" (UniqueName: \"kubernetes.io/projected/59a1e2c1-3d27-4e4a-810d-7c0385457374-kube-api-access-l859h\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.639012 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9x9pr" event={"ID":"1c56fdac-3acd-4d5f-b702-e25591b6112e","Type":"ContainerDied","Data":"303c2b6a9a0a3d3fae16211fe888fbe552691ddfc9852f87c4f6314a14afea43"} Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.639062 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="303c2b6a9a0a3d3fae16211fe888fbe552691ddfc9852f87c4f6314a14afea43" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.639090 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9x9pr" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.642374 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e9f6-account-create-d4bqm" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.642402 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e9f6-account-create-d4bqm" event={"ID":"18659cd0-889f-42a1-8c6a-297f6fd0cd68","Type":"ContainerDied","Data":"247dfe26ac57d5705712b4bab6073fc60fbf790fb17028b1b3b4b00a2f40f47d"} Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.642446 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="247dfe26ac57d5705712b4bab6073fc60fbf790fb17028b1b3b4b00a2f40f47d" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.644494 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7qnp8" event={"ID":"59a1e2c1-3d27-4e4a-810d-7c0385457374","Type":"ContainerDied","Data":"511c89bda8530d17404b1eb9751cec8bc27272e49db36cefc67af2175d39b905"} Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.644515 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7qnp8" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.644518 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511c89bda8530d17404b1eb9751cec8bc27272e49db36cefc67af2175d39b905" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.648219 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nwcft" event={"ID":"50ffd3d0-11ce-4a79-acb8-540431aa7db1","Type":"ContainerDied","Data":"38c86836eda8b3153ea1a31f4cbe1e608b0254bfbde0e7d471edb85cd378f286"} Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.648272 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c86836eda8b3153ea1a31f4cbe1e608b0254bfbde0e7d471edb85cd378f286" Sep 30 17:32:33 crc kubenswrapper[4688]: I0930 17:32:33.648355 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-nwcft" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.971046 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-f4dwg"] Sep 30 17:32:35 crc kubenswrapper[4688]: E0930 17:32:35.972185 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a1e2c1-3d27-4e4a-810d-7c0385457374" containerName="mariadb-database-create" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.972206 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a1e2c1-3d27-4e4a-810d-7c0385457374" containerName="mariadb-database-create" Sep 30 17:32:35 crc kubenswrapper[4688]: E0930 17:32:35.972230 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c56fdac-3acd-4d5f-b702-e25591b6112e" containerName="mariadb-database-create" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.972238 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c56fdac-3acd-4d5f-b702-e25591b6112e" containerName="mariadb-database-create" Sep 30 17:32:35 crc kubenswrapper[4688]: E0930 17:32:35.972263 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ffd3d0-11ce-4a79-acb8-540431aa7db1" containerName="mariadb-database-create" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.972271 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ffd3d0-11ce-4a79-acb8-540431aa7db1" containerName="mariadb-database-create" Sep 30 17:32:35 crc kubenswrapper[4688]: E0930 17:32:35.972285 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18659cd0-889f-42a1-8c6a-297f6fd0cd68" containerName="mariadb-account-create" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.972292 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="18659cd0-889f-42a1-8c6a-297f6fd0cd68" containerName="mariadb-account-create" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.972494 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="18659cd0-889f-42a1-8c6a-297f6fd0cd68" containerName="mariadb-account-create" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.972515 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c56fdac-3acd-4d5f-b702-e25591b6112e" containerName="mariadb-database-create" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.972523 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a1e2c1-3d27-4e4a-810d-7c0385457374" containerName="mariadb-database-create" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.972531 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ffd3d0-11ce-4a79-acb8-540431aa7db1" containerName="mariadb-database-create" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.973288 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.976495 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jhgvn" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.976665 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.977654 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 30 17:32:35 crc kubenswrapper[4688]: I0930 17:32:35.978069 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.000007 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-combined-ca-bundle\") pod \"keystone-db-sync-f4dwg\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") " pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.000077 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-config-data\") pod \"keystone-db-sync-f4dwg\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") " pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.000115 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7nm2\" (UniqueName: \"kubernetes.io/projected/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-kube-api-access-d7nm2\") pod \"keystone-db-sync-f4dwg\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") " pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.033011 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f4dwg"] Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.102138 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-combined-ca-bundle\") pod \"keystone-db-sync-f4dwg\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") " pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.102231 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-config-data\") pod \"keystone-db-sync-f4dwg\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") " pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.102275 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7nm2\" (UniqueName: \"kubernetes.io/projected/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-kube-api-access-d7nm2\") pod \"keystone-db-sync-f4dwg\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") " pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.111933 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-combined-ca-bundle\") pod \"keystone-db-sync-f4dwg\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") " 
pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.114448 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-config-data\") pod \"keystone-db-sync-f4dwg\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") " pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.130311 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7nm2\" (UniqueName: \"kubernetes.io/projected/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-kube-api-access-d7nm2\") pod \"keystone-db-sync-f4dwg\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") " pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.301923 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f4dwg" Sep 30 17:32:36 crc kubenswrapper[4688]: I0930 17:32:36.805594 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f4dwg"] Sep 30 17:32:37 crc kubenswrapper[4688]: I0930 17:32:37.704746 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4dwg" event={"ID":"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f","Type":"ContainerStarted","Data":"0a3b3ec368f9def18767696a6682d93bad557f23d794b7114c2c9422ea9f4e30"} Sep 30 17:32:40 crc kubenswrapper[4688]: I0930 17:32:40.733394 4688 generic.go:334] "Generic (PLEG): container finished" podID="41c69c99-2a0c-4f87-85c6-557b4e133567" containerID="272e5aba80cc2cc4cc59a5fef26ded3fbd7cc0f3ba9f1c386b1c89b50559908b" exitCode=0 Sep 30 17:32:40 crc kubenswrapper[4688]: I0930 17:32:40.733469 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7rrsk" event={"ID":"41c69c99-2a0c-4f87-85c6-557b4e133567","Type":"ContainerDied","Data":"272e5aba80cc2cc4cc59a5fef26ded3fbd7cc0f3ba9f1c386b1c89b50559908b"} Sep 30 17:32:41 crc kubenswrapper[4688]: I0930 17:32:41.517815 4688 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podb944aae0-dfff-452a-8618-1ca22a36ae3a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podb944aae0-dfff-452a-8618-1ca22a36ae3a] : Timed out while waiting for systemd to remove kubepods-besteffort-podb944aae0_dfff_452a_8618_1ca22a36ae3a.slice" Sep 30 17:32:41 crc kubenswrapper[4688]: E0930 17:32:41.518246 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podb944aae0-dfff-452a-8618-1ca22a36ae3a] : unable to destroy cgroup paths for cgroup [kubepods besteffort podb944aae0-dfff-452a-8618-1ca22a36ae3a] : Timed out while waiting for systemd to remove kubepods-besteffort-podb944aae0_dfff_452a_8618_1ca22a36ae3a.slice" pod="openstack/placement-05f7-account-create-fj4md" podUID="b944aae0-dfff-452a-8618-1ca22a36ae3a" Sep 30 17:32:41 crc kubenswrapper[4688]: I0930 17:32:41.742093 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-05f7-account-create-fj4md" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.268873 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.362615 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-db-sync-config-data\") pod \"41c69c99-2a0c-4f87-85c6-557b4e133567\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.362747 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-combined-ca-bundle\") pod \"41c69c99-2a0c-4f87-85c6-557b4e133567\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.362783 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v84cc\" (UniqueName: \"kubernetes.io/projected/41c69c99-2a0c-4f87-85c6-557b4e133567-kube-api-access-v84cc\") pod \"41c69c99-2a0c-4f87-85c6-557b4e133567\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.362865 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-config-data\") pod \"41c69c99-2a0c-4f87-85c6-557b4e133567\" (UID: \"41c69c99-2a0c-4f87-85c6-557b4e133567\") " Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.370965 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "41c69c99-2a0c-4f87-85c6-557b4e133567" (UID: "41c69c99-2a0c-4f87-85c6-557b4e133567"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.378359 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c69c99-2a0c-4f87-85c6-557b4e133567-kube-api-access-v84cc" (OuterVolumeSpecName: "kube-api-access-v84cc") pod "41c69c99-2a0c-4f87-85c6-557b4e133567" (UID: "41c69c99-2a0c-4f87-85c6-557b4e133567"). InnerVolumeSpecName "kube-api-access-v84cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.392983 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c69c99-2a0c-4f87-85c6-557b4e133567" (UID: "41c69c99-2a0c-4f87-85c6-557b4e133567"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.418733 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-config-data" (OuterVolumeSpecName: "config-data") pod "41c69c99-2a0c-4f87-85c6-557b4e133567" (UID: "41c69c99-2a0c-4f87-85c6-557b4e133567"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.465259 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.465307 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.465319 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v84cc\" (UniqueName: \"kubernetes.io/projected/41c69c99-2a0c-4f87-85c6-557b4e133567-kube-api-access-v84cc\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.465334 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c69c99-2a0c-4f87-85c6-557b4e133567-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.578537 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fd3a-account-create-fhg2v"] Sep 30 17:32:42 crc kubenswrapper[4688]: E0930 17:32:42.580090 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c69c99-2a0c-4f87-85c6-557b4e133567" containerName="glance-db-sync" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.580117 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c69c99-2a0c-4f87-85c6-557b4e133567" containerName="glance-db-sync" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.580335 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c69c99-2a0c-4f87-85c6-557b4e133567" containerName="glance-db-sync" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.581227 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fd3a-account-create-fhg2v" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.583896 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.589646 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fd3a-account-create-fhg2v"] Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.668868 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lqpq\" (UniqueName: \"kubernetes.io/projected/48054514-6a95-4e97-be70-14945f8f0896-kube-api-access-5lqpq\") pod \"cinder-fd3a-account-create-fhg2v\" (UID: \"48054514-6a95-4e97-be70-14945f8f0896\") " pod="openstack/cinder-fd3a-account-create-fhg2v" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.757976 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7rrsk" event={"ID":"41c69c99-2a0c-4f87-85c6-557b4e133567","Type":"ContainerDied","Data":"9a6d86f1c820db89410a558a216c0e12337a743a37e0768e372f953217871be2"} Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.758409 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a6d86f1c820db89410a558a216c0e12337a743a37e0768e372f953217871be2" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.757991 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7rrsk" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.772082 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lqpq\" (UniqueName: \"kubernetes.io/projected/48054514-6a95-4e97-be70-14945f8f0896-kube-api-access-5lqpq\") pod \"cinder-fd3a-account-create-fhg2v\" (UID: \"48054514-6a95-4e97-be70-14945f8f0896\") " pod="openstack/cinder-fd3a-account-create-fhg2v" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.780005 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4dwg" event={"ID":"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f","Type":"ContainerStarted","Data":"ec55c22ff30029c7818869185b6032b0c0831159ac305704af975aef49f18fbe"} Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.786019 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4ebc-account-create-vw6b5"] Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.794183 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lqpq\" (UniqueName: \"kubernetes.io/projected/48054514-6a95-4e97-be70-14945f8f0896-kube-api-access-5lqpq\") pod \"cinder-fd3a-account-create-fhg2v\" (UID: \"48054514-6a95-4e97-be70-14945f8f0896\") " pod="openstack/cinder-fd3a-account-create-fhg2v" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.796646 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4ebc-account-create-vw6b5" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.800706 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.806433 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4ebc-account-create-vw6b5"] Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.808945 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-f4dwg" podStartSLOduration=2.106105934 podStartE2EDuration="7.808935075s" podCreationTimestamp="2025-09-30 17:32:35 +0000 UTC" firstStartedPulling="2025-09-30 17:32:36.814040913 +0000 UTC m=+1014.109478471" lastFinishedPulling="2025-09-30 17:32:42.516870054 +0000 UTC m=+1019.812307612" observedRunningTime="2025-09-30 17:32:42.801427386 +0000 UTC m=+1020.096864944" watchObservedRunningTime="2025-09-30 17:32:42.808935075 +0000 UTC m=+1020.104372633" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.874504 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcd5f\" (UniqueName: \"kubernetes.io/projected/fcd9b069-6115-4396-bf19-d5567a1ee8b9-kube-api-access-fcd5f\") pod \"barbican-4ebc-account-create-vw6b5\" (UID: \"fcd9b069-6115-4396-bf19-d5567a1ee8b9\") " pod="openstack/barbican-4ebc-account-create-vw6b5" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.927166 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fd3a-account-create-fhg2v" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.976616 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcd5f\" (UniqueName: \"kubernetes.io/projected/fcd9b069-6115-4396-bf19-d5567a1ee8b9-kube-api-access-fcd5f\") pod \"barbican-4ebc-account-create-vw6b5\" (UID: \"fcd9b069-6115-4396-bf19-d5567a1ee8b9\") " pod="openstack/barbican-4ebc-account-create-vw6b5" Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.997411 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-583c-account-create-qmjqp"] Sep 30 17:32:42 crc kubenswrapper[4688]: I0930 17:32:42.999943 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-583c-account-create-qmjqp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.003185 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.016774 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcd5f\" (UniqueName: \"kubernetes.io/projected/fcd9b069-6115-4396-bf19-d5567a1ee8b9-kube-api-access-fcd5f\") pod \"barbican-4ebc-account-create-vw6b5\" (UID: \"fcd9b069-6115-4396-bf19-d5567a1ee8b9\") " pod="openstack/barbican-4ebc-account-create-vw6b5" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.042726 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-583c-account-create-qmjqp"] Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.083309 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5zt\" (UniqueName: \"kubernetes.io/projected/257f987c-0c66-44cf-9d24-7e9ca1855086-kube-api-access-bk5zt\") pod \"neutron-583c-account-create-qmjqp\" (UID: \"257f987c-0c66-44cf-9d24-7e9ca1855086\") " pod="openstack/neutron-583c-account-create-qmjqp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.124031 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4ebc-account-create-vw6b5" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.186332 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5zt\" (UniqueName: \"kubernetes.io/projected/257f987c-0c66-44cf-9d24-7e9ca1855086-kube-api-access-bk5zt\") pod \"neutron-583c-account-create-qmjqp\" (UID: \"257f987c-0c66-44cf-9d24-7e9ca1855086\") " pod="openstack/neutron-583c-account-create-qmjqp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.210307 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-5rtbp"] Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.212447 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.226417 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-5rtbp"] Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.242601 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5zt\" (UniqueName: \"kubernetes.io/projected/257f987c-0c66-44cf-9d24-7e9ca1855086-kube-api-access-bk5zt\") pod \"neutron-583c-account-create-qmjqp\" (UID: \"257f987c-0c66-44cf-9d24-7e9ca1855086\") " pod="openstack/neutron-583c-account-create-qmjqp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.288390 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hxv\" (UniqueName: \"kubernetes.io/projected/480fb91d-2974-4178-9d2d-6e2b074dd673-kube-api-access-v7hxv\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.288544 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.288652 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.288703 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-config\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.288742 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.343265 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-583c-account-create-qmjqp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.390701 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.390779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-config\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.390813 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.390861 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7hxv\" (UniqueName: \"kubernetes.io/projected/480fb91d-2974-4178-9d2d-6e2b074dd673-kube-api-access-v7hxv\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.390899 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.392302 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.393069 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.393718 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.404336 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-config\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 
17:32:43.415397 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7hxv\" (UniqueName: \"kubernetes.io/projected/480fb91d-2974-4178-9d2d-6e2b074dd673-kube-api-access-v7hxv\") pod \"dnsmasq-dns-5b946c75cc-5rtbp\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.564765 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4ebc-account-create-vw6b5"] Sep 30 17:32:43 crc kubenswrapper[4688]: W0930 17:32:43.569769 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcd9b069_6115_4396_bf19_d5567a1ee8b9.slice/crio-f4955fef09e0e55814e6b96492ca34fe5a7a4bb425be5c02fc9ba7993fd2b376 WatchSource:0}: Error finding container f4955fef09e0e55814e6b96492ca34fe5a7a4bb425be5c02fc9ba7993fd2b376: Status 404 returned error can't find the container with id f4955fef09e0e55814e6b96492ca34fe5a7a4bb425be5c02fc9ba7993fd2b376 Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.583285 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.644185 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.659342 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fd3a-account-create-fhg2v"] Sep 30 17:32:43 crc kubenswrapper[4688]: W0930 17:32:43.681668 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48054514_6a95_4e97_be70_14945f8f0896.slice/crio-e3bc34531397b5d4d5a1d58e0d42d20919fbccacd79cc492ba7a9a5bae76fcb3 WatchSource:0}: Error finding container e3bc34531397b5d4d5a1d58e0d42d20919fbccacd79cc492ba7a9a5bae76fcb3: Status 404 returned error can't find the container with id e3bc34531397b5d4d5a1d58e0d42d20919fbccacd79cc492ba7a9a5bae76fcb3 Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.690674 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.811908 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4ebc-account-create-vw6b5" event={"ID":"fcd9b069-6115-4396-bf19-d5567a1ee8b9","Type":"ContainerStarted","Data":"f4955fef09e0e55814e6b96492ca34fe5a7a4bb425be5c02fc9ba7993fd2b376"} Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.814087 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fd3a-account-create-fhg2v" event={"ID":"48054514-6a95-4e97-be70-14945f8f0896","Type":"ContainerStarted","Data":"e3bc34531397b5d4d5a1d58e0d42d20919fbccacd79cc492ba7a9a5bae76fcb3"} Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.936042 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-5rtbp"] Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.967247 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-583c-account-create-qmjqp"] Sep 30 17:32:43 crc kubenswrapper[4688]: W0930 17:32:43.990315 4688 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod257f987c_0c66_44cf_9d24_7e9ca1855086.slice/crio-98d21430c13bfdcb3985ccc8fd90d2135af243654f91095a64dc97e1312f07b0 WatchSource:0}: Error finding container 98d21430c13bfdcb3985ccc8fd90d2135af243654f91095a64dc97e1312f07b0: Status 404 returned error can't find the container with id 98d21430c13bfdcb3985ccc8fd90d2135af243654f91095a64dc97e1312f07b0 Sep 30 17:32:43 crc kubenswrapper[4688]: I0930 17:32:43.997285 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Sep 30 17:32:44 crc kubenswrapper[4688]: I0930 17:32:44.824871 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-583c-account-create-qmjqp" event={"ID":"257f987c-0c66-44cf-9d24-7e9ca1855086","Type":"ContainerStarted","Data":"98d21430c13bfdcb3985ccc8fd90d2135af243654f91095a64dc97e1312f07b0"} Sep 30 17:32:44 crc kubenswrapper[4688]: I0930 17:32:44.826072 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" event={"ID":"480fb91d-2974-4178-9d2d-6e2b074dd673","Type":"ContainerStarted","Data":"b58cc160060c8b2dd88c2b87d391568c6470de879655fc31bda8d14bf07e674a"} Sep 30 17:32:46 crc kubenswrapper[4688]: I0930 17:32:46.862524 4688 generic.go:334] "Generic (PLEG): container finished" podID="257f987c-0c66-44cf-9d24-7e9ca1855086" containerID="cd1c90e114797797574e94c4068be898222df2893b6947d587273eac37169a98" exitCode=0 Sep 30 17:32:46 crc kubenswrapper[4688]: I0930 17:32:46.862614 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-583c-account-create-qmjqp" event={"ID":"257f987c-0c66-44cf-9d24-7e9ca1855086","Type":"ContainerDied","Data":"cd1c90e114797797574e94c4068be898222df2893b6947d587273eac37169a98"} Sep 30 17:32:46 crc kubenswrapper[4688]: I0930 17:32:46.866751 4688 generic.go:334] "Generic (PLEG): container finished" podID="fcd9b069-6115-4396-bf19-d5567a1ee8b9" containerID="166f8f899bb368511d02c4ed34d4811679b37e0c41a24ad7df4c15d0fe43ed42" exitCode=0 Sep 30 17:32:46 crc kubenswrapper[4688]: I0930 17:32:46.866827 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4ebc-account-create-vw6b5" event={"ID":"fcd9b069-6115-4396-bf19-d5567a1ee8b9","Type":"ContainerDied","Data":"166f8f899bb368511d02c4ed34d4811679b37e0c41a24ad7df4c15d0fe43ed42"} Sep 30 17:32:46 crc kubenswrapper[4688]: I0930 17:32:46.868682 4688 generic.go:334] "Generic (PLEG): container finished" podID="480fb91d-2974-4178-9d2d-6e2b074dd673" containerID="182ea98469db85aa8a01966f1324b4bb8689ad7ca051c346749d825e67c69a11" exitCode=0 Sep 30 17:32:46 crc kubenswrapper[4688]: I0930 17:32:46.868771 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" event={"ID":"480fb91d-2974-4178-9d2d-6e2b074dd673","Type":"ContainerDied","Data":"182ea98469db85aa8a01966f1324b4bb8689ad7ca051c346749d825e67c69a11"} Sep 30 17:32:46 crc kubenswrapper[4688]: I0930 17:32:46.870951 4688 generic.go:334] "Generic (PLEG): container finished" podID="48054514-6a95-4e97-be70-14945f8f0896" containerID="c83776b1ae18e3a441c563245fa153fe3b8490d430fb5d5cacefc495419f2d87" exitCode=0 Sep 30 17:32:46 crc kubenswrapper[4688]: I0930 17:32:46.870980 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fd3a-account-create-fhg2v" event={"ID":"48054514-6a95-4e97-be70-14945f8f0896","Type":"ContainerDied","Data":"c83776b1ae18e3a441c563245fa153fe3b8490d430fb5d5cacefc495419f2d87"} Sep 30 17:32:47 crc kubenswrapper[4688]: I0930 
17:32:47.883177 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" event={"ID":"480fb91d-2974-4178-9d2d-6e2b074dd673","Type":"ContainerStarted","Data":"f70759bf94bec1ee42d088830c9ca3314a7208f026871e463c94ddc833208c4c"} Sep 30 17:32:47 crc kubenswrapper[4688]: I0930 17:32:47.883724 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:47 crc kubenswrapper[4688]: I0930 17:32:47.922828 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" podStartSLOduration=4.922803753 podStartE2EDuration="4.922803753s" podCreationTimestamp="2025-09-30 17:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:32:47.90760207 +0000 UTC m=+1025.203039668" watchObservedRunningTime="2025-09-30 17:32:47.922803753 +0000 UTC m=+1025.218241311" Sep 30 17:32:48 crc kubenswrapper[4688]: E0930 17:32:48.206505 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df0f6a0_3589_4f52_a7e8_8bf54ad1c09f.slice/crio-ec55c22ff30029c7818869185b6032b0c0831159ac305704af975aef49f18fbe.scope\": RecentStats: unable to find data in memory cache]" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.333526 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-583c-account-create-qmjqp" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.341450 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4ebc-account-create-vw6b5" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.355073 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fd3a-account-create-fhg2v" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.452937 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcd5f\" (UniqueName: \"kubernetes.io/projected/fcd9b069-6115-4396-bf19-d5567a1ee8b9-kube-api-access-fcd5f\") pod \"fcd9b069-6115-4396-bf19-d5567a1ee8b9\" (UID: \"fcd9b069-6115-4396-bf19-d5567a1ee8b9\") " Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.453087 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lqpq\" (UniqueName: \"kubernetes.io/projected/48054514-6a95-4e97-be70-14945f8f0896-kube-api-access-5lqpq\") pod \"48054514-6a95-4e97-be70-14945f8f0896\" (UID: \"48054514-6a95-4e97-be70-14945f8f0896\") " Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.453249 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk5zt\" (UniqueName: \"kubernetes.io/projected/257f987c-0c66-44cf-9d24-7e9ca1855086-kube-api-access-bk5zt\") pod \"257f987c-0c66-44cf-9d24-7e9ca1855086\" (UID: \"257f987c-0c66-44cf-9d24-7e9ca1855086\") " Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.465891 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257f987c-0c66-44cf-9d24-7e9ca1855086-kube-api-access-bk5zt" (OuterVolumeSpecName: "kube-api-access-bk5zt") pod "257f987c-0c66-44cf-9d24-7e9ca1855086" (UID: "257f987c-0c66-44cf-9d24-7e9ca1855086"). InnerVolumeSpecName "kube-api-access-bk5zt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.465972 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd9b069-6115-4396-bf19-d5567a1ee8b9-kube-api-access-fcd5f" (OuterVolumeSpecName: "kube-api-access-fcd5f") pod "fcd9b069-6115-4396-bf19-d5567a1ee8b9" (UID: "fcd9b069-6115-4396-bf19-d5567a1ee8b9"). InnerVolumeSpecName "kube-api-access-fcd5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.469277 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48054514-6a95-4e97-be70-14945f8f0896-kube-api-access-5lqpq" (OuterVolumeSpecName: "kube-api-access-5lqpq") pod "48054514-6a95-4e97-be70-14945f8f0896" (UID: "48054514-6a95-4e97-be70-14945f8f0896"). InnerVolumeSpecName "kube-api-access-5lqpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.555853 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcd5f\" (UniqueName: \"kubernetes.io/projected/fcd9b069-6115-4396-bf19-d5567a1ee8b9-kube-api-access-fcd5f\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.555897 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lqpq\" (UniqueName: \"kubernetes.io/projected/48054514-6a95-4e97-be70-14945f8f0896-kube-api-access-5lqpq\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.555906 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk5zt\" (UniqueName: \"kubernetes.io/projected/257f987c-0c66-44cf-9d24-7e9ca1855086-kube-api-access-bk5zt\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.893355 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-583c-account-create-qmjqp" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.893439 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-583c-account-create-qmjqp" event={"ID":"257f987c-0c66-44cf-9d24-7e9ca1855086","Type":"ContainerDied","Data":"98d21430c13bfdcb3985ccc8fd90d2135af243654f91095a64dc97e1312f07b0"} Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.893977 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d21430c13bfdcb3985ccc8fd90d2135af243654f91095a64dc97e1312f07b0" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.895557 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4ebc-account-create-vw6b5" event={"ID":"fcd9b069-6115-4396-bf19-d5567a1ee8b9","Type":"ContainerDied","Data":"f4955fef09e0e55814e6b96492ca34fe5a7a4bb425be5c02fc9ba7993fd2b376"} Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.895608 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4955fef09e0e55814e6b96492ca34fe5a7a4bb425be5c02fc9ba7993fd2b376" Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.895647 4688 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.897825 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fd3a-account-create-fhg2v" event={"ID":"48054514-6a95-4e97-be70-14945f8f0896","Type":"ContainerDied","Data":"e3bc34531397b5d4d5a1d58e0d42d20919fbccacd79cc492ba7a9a5bae76fcb3"}
Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.897879 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3bc34531397b5d4d5a1d58e0d42d20919fbccacd79cc492ba7a9a5bae76fcb3"
Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.897845 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fd3a-account-create-fhg2v"
Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.899371 4688 generic.go:334] "Generic (PLEG): container finished" podID="6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f" containerID="ec55c22ff30029c7818869185b6032b0c0831159ac305704af975aef49f18fbe" exitCode=0
Sep 30 17:32:48 crc kubenswrapper[4688]: I0930 17:32:48.899631 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4dwg" event={"ID":"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f","Type":"ContainerDied","Data":"ec55c22ff30029c7818869185b6032b0c0831159ac305704af975aef49f18fbe"}
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.241237 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f4dwg"
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.396443 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7nm2\" (UniqueName: \"kubernetes.io/projected/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-kube-api-access-d7nm2\") pod \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") "
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.396552 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-config-data\") pod \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") "
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.396711 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-combined-ca-bundle\") pod \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\" (UID: \"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f\") "
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.402391 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-kube-api-access-d7nm2" (OuterVolumeSpecName: "kube-api-access-d7nm2") pod "6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f" (UID: "6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f"). InnerVolumeSpecName "kube-api-access-d7nm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.426278 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f" (UID: "6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.444555 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-config-data" (OuterVolumeSpecName: "config-data") pod "6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f" (UID: "6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.499851 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7nm2\" (UniqueName: \"kubernetes.io/projected/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-kube-api-access-d7nm2\") on node \"crc\" DevicePath \"\""
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.499896 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.499911 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.924506 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4dwg" event={"ID":"6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f","Type":"ContainerDied","Data":"0a3b3ec368f9def18767696a6682d93bad557f23d794b7114c2c9422ea9f4e30"}
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.924559 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a3b3ec368f9def18767696a6682d93bad557f23d794b7114c2c9422ea9f4e30"
Sep 30 17:32:50 crc kubenswrapper[4688]: I0930 17:32:50.925062 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f4dwg"
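Annotation: each volume above passes through three logged phases during pod teardown: "UnmountVolume started" (reconciler_common.go:159), "UnmountVolume.TearDown succeeded" (operation_generator.go:803), and "Volume detached" (reconciler_common.go:293). A minimal sketch, assuming only the record text shown above (raw strings keep the backslash-escaped quotes intact), that checks every started unmount reached the detach phase:

import re

# Trimmed copies of the three phases for the keystone-db-sync config-data volume.
records = [
    r'reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: ...',
    r'operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f-config-data"',
    r'reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: ...) on node \"crc\" DevicePath \"\""',
]

joined = '\n'.join(records)
started = set(re.findall(r'UnmountVolume started for volume \\"([^\\]+)\\"', joined))
detached = set(re.findall(r'Volume detached for volume \\"([^\\]+)\\"', joined))
print(sorted(started - detached))  # [] -> every unmount reached the detach phase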
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.250689 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-5rtbp"]
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.251731 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" podUID="480fb91d-2974-4178-9d2d-6e2b074dd673" containerName="dnsmasq-dns" containerID="cri-o://f70759bf94bec1ee42d088830c9ca3314a7208f026871e463c94ddc833208c4c" gracePeriod=10
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.317660 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784f69c749-9vpsh"]
Sep 30 17:32:51 crc kubenswrapper[4688]: E0930 17:32:51.318230 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd9b069-6115-4396-bf19-d5567a1ee8b9" containerName="mariadb-account-create"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.318256 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd9b069-6115-4396-bf19-d5567a1ee8b9" containerName="mariadb-account-create"
Sep 30 17:32:51 crc kubenswrapper[4688]: E0930 17:32:51.318280 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48054514-6a95-4e97-be70-14945f8f0896" containerName="mariadb-account-create"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.318289 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="48054514-6a95-4e97-be70-14945f8f0896" containerName="mariadb-account-create"
Sep 30 17:32:51 crc kubenswrapper[4688]: E0930 17:32:51.318304 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257f987c-0c66-44cf-9d24-7e9ca1855086" containerName="mariadb-account-create"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.318312 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="257f987c-0c66-44cf-9d24-7e9ca1855086" containerName="mariadb-account-create"
Sep 30 17:32:51 crc kubenswrapper[4688]: E0930 17:32:51.318345 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f" containerName="keystone-db-sync"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.318353 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f" containerName="keystone-db-sync"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.318689 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="48054514-6a95-4e97-be70-14945f8f0896" containerName="mariadb-account-create"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.318725 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f" containerName="keystone-db-sync"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.318742 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="257f987c-0c66-44cf-9d24-7e9ca1855086" containerName="mariadb-account-create"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.318756 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd9b069-6115-4396-bf19-d5567a1ee8b9" containerName="mariadb-account-create"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.319883 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
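Annotation: the kill above is issued with gracePeriod=10, and the matching ContainerDied PLEG event for the same containerID (f70759bf...) lands at 17:32:51.988294, well inside the window. A minimal sketch, assuming both klog timestamps fall on the same day as shown, computing how much of the grace period was used:

from datetime import datetime

# klog timestamps from the two records for container f70759bf...8c4c:
# the "Killing container with a grace period" request and the ContainerDied event.
killed = datetime.strptime('17:32:51.251731', '%H:%M:%S.%f')
died = datetime.strptime('17:32:51.988294', '%H:%M:%S.%f')

gap = (died - killed).total_seconds()
print(f'{gap:.3f}s of a 10s grace period used')  # 0.737s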
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.333351 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p2rqt"]
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.334619 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.338719 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.338916 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.339157 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jhgvn"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.339338 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.391670 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p2rqt"]
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.415197 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-9vpsh"]
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418199 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-config\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418277 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-dns-svc\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418364 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-fernet-keys\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418418 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-config-data\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418496 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-sb\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418530 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-scripts\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418811 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzct5\" (UniqueName: \"kubernetes.io/projected/15f67ad5-7444-4421-a361-44d9e90ec362-kube-api-access-dzct5\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418867 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-nb\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418903 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-credential-keys\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.418933 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-combined-ca-bundle\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.419030 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c8zs\" (UniqueName: \"kubernetes.io/projected/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-kube-api-access-2c8zs\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.519493 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-87885ffcc-xqwzc"]
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520473 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-sb\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520550 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-scripts\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520634 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzct5\" (UniqueName: \"kubernetes.io/projected/15f67ad5-7444-4421-a361-44d9e90ec362-kube-api-access-dzct5\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520661 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-nb\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520691 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-credential-keys\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520718 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-combined-ca-bundle\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520759 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c8zs\" (UniqueName: \"kubernetes.io/projected/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-kube-api-access-2c8zs\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520862 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-config\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520906 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-dns-svc\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520953 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-fernet-keys\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.520987 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-config-data\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.521241 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-87885ffcc-xqwzc"
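Annotation: the reconciler emits one VerifyControllerAttachedVolume record per volume before any mount starts, so grouping those records by the trailing pod= field recovers each pod's expected volume set. A minimal sketch, assuming only the record fragments shown above (trimmed to the two fields the grouping needs):

import re
from collections import defaultdict

# Trimmed copies of VerifyControllerAttachedVolume records from the log above.
records = [
    r'started for volume \"fernet-keys\" ... pod="openstack/keystone-bootstrap-p2rqt"',
    r'started for volume \"config-data\" ... pod="openstack/keystone-bootstrap-p2rqt"',
    r'started for volume \"dns-svc\" ... pod="openstack/dnsmasq-dns-784f69c749-9vpsh"',
]

volumes = defaultdict(set)
for rec in records:
    name = re.search(r'volume \\"([^\\]+)\\"', rec).group(1)
    pod = re.search(r'pod="([^"]+)"$', rec).group(1)
    volumes[pod].add(name)

for pod, names in sorted(volumes.items()):
    print(pod, sorted(names))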
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.528944 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-config\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.530045 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.530355 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9jmbt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.530759 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.531592 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-sb\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.532984 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-credential-keys\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.536714 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-nb\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.543824 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.547466 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-fernet-keys\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.547779 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-dns-svc\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.549408 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-87885ffcc-xqwzc"]
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.555103 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-scripts\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.557325 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-config-data\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.577104 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-combined-ca-bundle\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.592882 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzct5\" (UniqueName: \"kubernetes.io/projected/15f67ad5-7444-4421-a361-44d9e90ec362-kube-api-access-dzct5\") pod \"keystone-bootstrap-p2rqt\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") " pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.602329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c8zs\" (UniqueName: \"kubernetes.io/projected/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-kube-api-access-2c8zs\") pod \"dnsmasq-dns-784f69c749-9vpsh\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.625287 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50afc300-49d0-4acc-bbf3-27ca00a1a965-logs\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.625344 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-config-data\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.625397 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50afc300-49d0-4acc-bbf3-27ca00a1a965-horizon-secret-key\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.625418 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgkk7\" (UniqueName: \"kubernetes.io/projected/50afc300-49d0-4acc-bbf3-27ca00a1a965-kube-api-access-dgkk7\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.625506 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-scripts\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.666075 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-9vpsh"
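Annotation: every "MountVolume started" record (reconciler_common.go:218) should be followed by a matching "MountVolume.SetUp succeeded" record (operation_generator.go:637) for the same (pod, volume) pair. A minimal sketch, assuming only the record fragments shown above (the key helper is hypothetical), that flags mounts still pending completion:

import re

# Two trimmed records for the same volume: the reconciler queuing the mount
# and the operation executor reporting completion.
log = [
    r'reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" ... pod="openstack/keystone-bootstrap-p2rqt"',
    r'operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" ... pod="openstack/keystone-bootstrap-p2rqt"',
]

def key(rec):
    # (pod, volume) identity for one MountVolume record
    vol = re.search(r'volume \\"([^\\]+)\\"', rec).group(1)
    pod = re.search(r'pod="([^"]+)"', rec).group(1)
    return pod, vol

started = {key(r) for r in log if 'MountVolume started' in r}
done = {key(r) for r in log if 'MountVolume.SetUp succeeded' in r}
print(sorted(started - done))  # [] once every queued mount has completed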
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.729121 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-scripts\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.729275 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50afc300-49d0-4acc-bbf3-27ca00a1a965-logs\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.729320 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-config-data\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.729359 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50afc300-49d0-4acc-bbf3-27ca00a1a965-horizon-secret-key\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.729387 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgkk7\" (UniqueName: \"kubernetes.io/projected/50afc300-49d0-4acc-bbf3-27ca00a1a965-kube-api-access-dgkk7\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.730178 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50afc300-49d0-4acc-bbf3-27ca00a1a965-logs\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.730778 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-scripts\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.731162 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-config-data\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.741469 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.765833 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgkk7\" (UniqueName: \"kubernetes.io/projected/50afc300-49d0-4acc-bbf3-27ca00a1a965-kube-api-access-dgkk7\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.786019 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50afc300-49d0-4acc-bbf3-27ca00a1a965-horizon-secret-key\") pod \"horizon-87885ffcc-xqwzc\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") " pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.824800 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.857483 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.871928 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.878697 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.878731 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.945517 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5668f89c89-nk5pt"]
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.953800 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5668f89c89-nk5pt"
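Annotation: two distinct sandbox messages appear in this stretch: util.go:48 ("No ready sandbox for pod can be found") for pods whose previous sandbox has gone away, and util.go:30 ("No sandbox for pod can be found") for freshly added pods with no sandbox yet; that reading is inferred from the surrounding ADD/ContainerDied events, not confirmed by the source. A minimal sketch tallying the two call sites (pod names copied from the records above):

from collections import Counter

# (call site, pod) pairs taken from the sandbox messages in this section.
messages = [
    ('util.go:48', 'openstack/neutron-583c-account-create-qmjqp'),
    ('util.go:48', 'openstack/keystone-db-sync-f4dwg'),
    ('util.go:30', 'openstack/horizon-87885ffcc-xqwzc'),
    ('util.go:30', 'openstack/keystone-bootstrap-p2rqt'),
]

print(Counter(site for site, _ in messages))
# Counter({'util.go:48': 2, 'util.go:30': 2})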
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.988205 4688 generic.go:334] "Generic (PLEG): container finished" podID="480fb91d-2974-4178-9d2d-6e2b074dd673" containerID="f70759bf94bec1ee42d088830c9ca3314a7208f026871e463c94ddc833208c4c" exitCode=0
Sep 30 17:32:51 crc kubenswrapper[4688]: I0930 17:32:51.988294 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" event={"ID":"480fb91d-2974-4178-9d2d-6e2b074dd673","Type":"ContainerDied","Data":"f70759bf94bec1ee42d088830c9ca3314a7208f026871e463c94ddc833208c4c"}
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.001101 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-9vpsh"]
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.030696 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.062454 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktjr2\" (UniqueName: \"kubernetes.io/projected/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-kube-api-access-ktjr2\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.062524 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00ce8852-0763-48b1-ab50-42c386c1065c-horizon-secret-key\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.062560 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-run-httpd\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.062619 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.062646 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-config-data\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.062685 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-log-httpd\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.062718 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.062858 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-scripts\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.063536 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce8852-0763-48b1-ab50-42c386c1065c-logs\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.063584 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-config-data\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.063648 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4h68\" (UniqueName: \"kubernetes.io/projected/00ce8852-0763-48b1-ab50-42c386c1065c-kube-api-access-w4h68\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.063693 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-scripts\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.062930 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5668f89c89-nk5pt"]
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.092619 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-5ljcb"]
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.094215 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.109249 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7f8lt"]
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.110484 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7f8lt"
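Annotation: the event= payload in the PLEG records above is valid JSON embedded in a klog line, so the three fields can be recovered without any custom parsing. A minimal sketch, assuming only the record format shown above:

import json
import re

# A PLEG record from the log above, shortened to the fields being parsed.
line = ('kubelet.go:2453] "SyncLoop (PLEG): event for pod" '
        'pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" '
        'event={"ID":"480fb91d-2974-4178-9d2d-6e2b074dd673",'
        '"Type":"ContainerDied",'
        '"Data":"f70759bf94bec1ee42d088830c9ca3314a7208f026871e463c94ddc833208c4c"}')

event = json.loads(re.search(r'event=({.*})', line).group(1))
print(event['Type'], event['Data'][:12])  # ContainerDied f70759bf94be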
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.118468 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.119494 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w6vl7"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.119816 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.133688 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7f8lt"]
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.144200 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-5ljcb"]
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.165743 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-scripts\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.165783 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce8852-0763-48b1-ab50-42c386c1065c-logs\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.165814 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-config-data\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.165858 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4h68\" (UniqueName: \"kubernetes.io/projected/00ce8852-0763-48b1-ab50-42c386c1065c-kube-api-access-w4h68\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.165890 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-scripts\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.165931 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktjr2\" (UniqueName: \"kubernetes.io/projected/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-kube-api-access-ktjr2\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.165958 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00ce8852-0763-48b1-ab50-42c386c1065c-horizon-secret-key\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.165986 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-run-httpd\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.166019 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.166044 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-config-data\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.166084 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-log-httpd\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.166116 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.166231 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce8852-0763-48b1-ab50-42c386c1065c-logs\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.168887 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.171789 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-scripts\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.175956 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.176953 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00ce8852-0763-48b1-ab50-42c386c1065c-horizon-secret-key\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.181082 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-log-httpd\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.181410 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-scripts\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.181439 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pcgjs"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.181844 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.181861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-run-httpd\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.184310 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.186855 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.187408 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.188202 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-config-data\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.190010 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-config-data\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.203127 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktjr2\" (UniqueName: \"kubernetes.io/projected/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-kube-api-access-ktjr2\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.206735 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.217558 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.223037 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4h68\" (UniqueName: \"kubernetes.io/projected/00ce8852-0763-48b1-ab50-42c386c1065c-kube-api-access-w4h68\") pod \"horizon-5668f89c89-nk5pt\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.227999 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.267946 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-config\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268059 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-dns-svc\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268090 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268121 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67kjr\" (UniqueName: \"kubernetes.io/projected/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-kube-api-access-67kjr\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268157 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-config-data\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268252 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-logs\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268282 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjccw\" (UniqueName: \"kubernetes.io/projected/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-kube-api-access-mjccw\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268413 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268442 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-config-data\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268496 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268522 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-scripts\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268562 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268627 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-scripts\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268653 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268676 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-combined-ca-bundle\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268707 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s8fv\" (UniqueName: \"kubernetes.io/projected/841eada3-3904-429c-b6c0-f1238cc8e882-kube-api-access-9s8fv\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268748 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-logs\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.268781 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.290883 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370101 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370187 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-scripts\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370212 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370234 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-combined-ca-bundle\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370266 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s8fv\" (UniqueName: \"kubernetes.io/projected/841eada3-3904-429c-b6c0-f1238cc8e882-kube-api-access-9s8fv\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-logs\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370339 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370372 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-config\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370423 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-dns-svc\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370446 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67kjr\" (UniqueName: \"kubernetes.io/projected/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-kube-api-access-67kjr\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370468 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370499 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-config-data\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370524 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-logs\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370554 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjccw\" (UniqueName: \"kubernetes.io/projected/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-kube-api-access-mjccw\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370685 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370715 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-config-data\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370768 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.370807 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-scripts\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.377866 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-dns-svc\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.378712 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-logs\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.379975 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.380501 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-scripts\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.380649 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-config\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.381309 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0"
Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.386414 4688 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-logs\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.387535 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-combined-ca-bundle\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.387932 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.389336 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-scripts\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.392226 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.392354 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.395035 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-config-data\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.395759 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-config-data\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.403030 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.404317 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s8fv\" (UniqueName: \"kubernetes.io/projected/841eada3-3904-429c-b6c0-f1238cc8e882-kube-api-access-9s8fv\") pod \"glance-default-external-api-0\" (UID: 
\"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.406147 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67kjr\" (UniqueName: \"kubernetes.io/projected/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-kube-api-access-67kjr\") pod \"dnsmasq-dns-f84976bdf-5ljcb\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.414875 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjccw\" (UniqueName: \"kubernetes.io/projected/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-kube-api-access-mjccw\") pod \"placement-db-sync-7f8lt\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " pod="openstack/placement-db-sync-7f8lt" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.433234 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " pod="openstack/glance-default-external-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.472872 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.489866 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7f8lt" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.532773 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-9vpsh"] Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.541955 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.644600 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.646536 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.652340 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.652344 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.654626 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.743732 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.781962 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.782036 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.782060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.782081 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.782105 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.782128 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.782151 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.782244 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sz7m\" (UniqueName: \"kubernetes.io/projected/a7a76007-a70a-4246-ad40-a0e78b4fe699-kube-api-access-7sz7m\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.836562 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-c6cps"] Sep 30 17:32:52 crc kubenswrapper[4688]: E0930 17:32:52.837225 4688 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="480fb91d-2974-4178-9d2d-6e2b074dd673" containerName="init" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.837252 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="480fb91d-2974-4178-9d2d-6e2b074dd673" containerName="init" Sep 30 17:32:52 crc kubenswrapper[4688]: E0930 17:32:52.837307 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480fb91d-2974-4178-9d2d-6e2b074dd673" containerName="dnsmasq-dns" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.837318 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="480fb91d-2974-4178-9d2d-6e2b074dd673" containerName="dnsmasq-dns" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.837616 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="480fb91d-2974-4178-9d2d-6e2b074dd673" containerName="dnsmasq-dns" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.838549 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.843065 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.843383 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-btfmv" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.843723 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.848288 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-c6cps"] Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.887643 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-dns-svc\") pod \"480fb91d-2974-4178-9d2d-6e2b074dd673\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.887855 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7hxv\" (UniqueName: \"kubernetes.io/projected/480fb91d-2974-4178-9d2d-6e2b074dd673-kube-api-access-v7hxv\") pod \"480fb91d-2974-4178-9d2d-6e2b074dd673\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.887921 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-nb\") pod \"480fb91d-2974-4178-9d2d-6e2b074dd673\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.888003 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-config\") pod \"480fb91d-2974-4178-9d2d-6e2b074dd673\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.888130 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-sb\") pod \"480fb91d-2974-4178-9d2d-6e2b074dd673\" (UID: \"480fb91d-2974-4178-9d2d-6e2b074dd673\") " Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.916775 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.917529 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.917696 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.917889 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.919001 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.919039 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.919095 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.919554 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sz7m\" (UniqueName: \"kubernetes.io/projected/a7a76007-a70a-4246-ad40-a0e78b4fe699-kube-api-access-7sz7m\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.921066 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480fb91d-2974-4178-9d2d-6e2b074dd673-kube-api-access-v7hxv" (OuterVolumeSpecName: "kube-api-access-v7hxv") pod "480fb91d-2974-4178-9d2d-6e2b074dd673" (UID: "480fb91d-2974-4178-9d2d-6e2b074dd673"). InnerVolumeSpecName "kube-api-access-v7hxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.918494 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-logs\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.924521 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.936634 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.946899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.954580 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.977960 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.997394 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sz7m\" (UniqueName: \"kubernetes.io/projected/a7a76007-a70a-4246-ad40-a0e78b4fe699-kube-api-access-7sz7m\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:52 crc kubenswrapper[4688]: I0930 17:32:52.998361 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.026998 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-combined-ca-bundle\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 
17:32:53.027054 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-db-sync-config-data\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.027109 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-config-data\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.027254 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4rwj\" (UniqueName: \"kubernetes.io/projected/8fc61503-23f4-4eb4-b3fa-e1825b84881f-kube-api-access-b4rwj\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.027282 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-scripts\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.027334 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fc61503-23f4-4eb4-b3fa-e1825b84881f-etc-machine-id\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.027704 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7hxv\" (UniqueName: \"kubernetes.io/projected/480fb91d-2974-4178-9d2d-6e2b074dd673-kube-api-access-v7hxv\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.070536 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-9vpsh" event={"ID":"6cdc0750-2437-4608-aeb2-57c6d2ee76ef","Type":"ContainerStarted","Data":"50a220ffad2391921d10cdb1b7eaed9724e51984558eb02bcb152659ad900d3c"} Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.089848 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p2rqt"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.110551 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" event={"ID":"480fb91d-2974-4178-9d2d-6e2b074dd673","Type":"ContainerDied","Data":"b58cc160060c8b2dd88c2b87d391568c6470de879655fc31bda8d14bf07e674a"} Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.110929 4688 scope.go:117] "RemoveContainer" containerID="f70759bf94bec1ee42d088830c9ca3314a7208f026871e463c94ddc833208c4c" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.111091 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-5rtbp" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.131899 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.132093 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4rwj\" (UniqueName: \"kubernetes.io/projected/8fc61503-23f4-4eb4-b3fa-e1825b84881f-kube-api-access-b4rwj\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.132234 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-scripts\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.132459 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fc61503-23f4-4eb4-b3fa-e1825b84881f-etc-machine-id\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.132657 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-combined-ca-bundle\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.132819 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-db-sync-config-data\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.133021 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-config-data\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: E0930 17:32:53.162960 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:32:53 crc kubenswrapper[4688]: E0930 17:32:53.163011 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:32:53 crc kubenswrapper[4688]: E0930 17:32:53.163095 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:33:57.163065531 +0000 UTC m=+1094.458503089 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.170666 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fc61503-23f4-4eb4-b3fa-e1825b84881f-etc-machine-id\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.191830 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-87885ffcc-xqwzc"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.235174 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-combined-ca-bundle\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.236314 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-config-data\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.236459 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-scripts\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.238912 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-db-sync-config-data\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.273532 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5668f89c89-nk5pt"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.276072 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.305045 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4rwj\" (UniqueName: \"kubernetes.io/projected/8fc61503-23f4-4eb4-b3fa-e1825b84881f-kube-api-access-b4rwj\") pod \"cinder-db-sync-c6cps\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.319266 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fxx6x"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.321176 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.326493 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.329595 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.340198 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jw2t8" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.358337 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "480fb91d-2974-4178-9d2d-6e2b074dd673" (UID: "480fb91d-2974-4178-9d2d-6e2b074dd673"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.390636 4688 scope.go:117] "RemoveContainer" containerID="182ea98469db85aa8a01966f1324b4bb8689ad7ca051c346749d825e67c69a11" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.409183 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.451107 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-db-sync-config-data\") pod \"barbican-db-sync-fxx6x\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.451184 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v477x\" (UniqueName: \"kubernetes.io/projected/d1c36448-a984-423c-a9b6-6c4d210ca4b9-kube-api-access-v477x\") pod \"barbican-db-sync-fxx6x\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.451213 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-combined-ca-bundle\") pod \"barbican-db-sync-fxx6x\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.451276 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.484031 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-c6cps" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.550716 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-config" (OuterVolumeSpecName: "config") pod "480fb91d-2974-4178-9d2d-6e2b074dd673" (UID: "480fb91d-2974-4178-9d2d-6e2b074dd673"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.554494 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-db-sync-config-data\") pod \"barbican-db-sync-fxx6x\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.555932 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v477x\" (UniqueName: \"kubernetes.io/projected/d1c36448-a984-423c-a9b6-6c4d210ca4b9-kube-api-access-v477x\") pod \"barbican-db-sync-fxx6x\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.555983 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-combined-ca-bundle\") pod \"barbican-db-sync-fxx6x\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.556298 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.565413 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-db-sync-config-data\") pod \"barbican-db-sync-fxx6x\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.577035 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-combined-ca-bundle\") pod \"barbican-db-sync-fxx6x\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.578799 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "480fb91d-2974-4178-9d2d-6e2b074dd673" (UID: "480fb91d-2974-4178-9d2d-6e2b074dd673"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.580542 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "480fb91d-2974-4178-9d2d-6e2b074dd673" (UID: "480fb91d-2974-4178-9d2d-6e2b074dd673"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.629521 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v477x\" (UniqueName: \"kubernetes.io/projected/d1c36448-a984-423c-a9b6-6c4d210ca4b9-kube-api-access-v477x\") pod \"barbican-db-sync-fxx6x\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.659440 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.659500 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/480fb91d-2974-4178-9d2d-6e2b074dd673-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.729839 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fxx6x"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.729893 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-5ljcb"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.729915 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ptk4z"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.731303 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ptk4z"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.731423 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.738410 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cjvvn" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.738647 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.738832 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.770593 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.804480 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7f8lt"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.838274 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-5rtbp"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.850925 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-5rtbp"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.877810 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-combined-ca-bundle\") pod \"neutron-db-sync-ptk4z\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.877940 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vvhs\" (UniqueName: \"kubernetes.io/projected/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-kube-api-access-7vvhs\") pod \"neutron-db-sync-ptk4z\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.878107 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-config\") pod \"neutron-db-sync-ptk4z\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.884682 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.979293 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-combined-ca-bundle\") pod \"neutron-db-sync-ptk4z\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.979403 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvhs\" (UniqueName: \"kubernetes.io/projected/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-kube-api-access-7vvhs\") pod \"neutron-db-sync-ptk4z\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.979490 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-config\") pod \"neutron-db-sync-ptk4z\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.986511 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-config\") pod \"neutron-db-sync-ptk4z\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:53 crc kubenswrapper[4688]: I0930 17:32:53.987423 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-combined-ca-bundle\") pod \"neutron-db-sync-ptk4z\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.009108 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvhs\" (UniqueName: \"kubernetes.io/projected/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-kube-api-access-7vvhs\") pod \"neutron-db-sync-ptk4z\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.073781 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.189734 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-87885ffcc-xqwzc" event={"ID":"50afc300-49d0-4acc-bbf3-27ca00a1a965","Type":"ContainerStarted","Data":"91417fa57a7ba4a5f89d28ebb528066aaad4c302882d0d82bb62a76fd902de1b"} Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.194072 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-c6cps"] Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.206910 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"841eada3-3904-429c-b6c0-f1238cc8e882","Type":"ContainerStarted","Data":"c625384c219144b930481564d27fe0aac72d7ba01290bf6994efd6cca17329f9"} Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.215943 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerStarted","Data":"d1f2f839a1f0ad629e635a8384bb260a91b7fd243e30bce6e25535903ef652f1"} Sep 30 17:32:54 crc kubenswrapper[4688]: W0930 17:32:54.243508 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fc61503_23f4_4eb4_b3fa_e1825b84881f.slice/crio-6f0c546525639c499e4834852cc200703f6a0118ec580837bec64f4fe9012807 WatchSource:0}: Error finding container 6f0c546525639c499e4834852cc200703f6a0118ec580837bec64f4fe9012807: Status 404 returned error can't find the container with id 6f0c546525639c499e4834852cc200703f6a0118ec580837bec64f4fe9012807 Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.243747 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" event={"ID":"2da37fae-0534-46b8-bd59-8d4a3c6fffd5","Type":"ContainerStarted","Data":"3f36b94c397dbb2d86221fe3df0a1571c40c41e8face95de238de6915fccc268"} Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.262349 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7f8lt" event={"ID":"22982660-e8a2-4e49-9b5d-ebc252fd1c8e","Type":"ContainerStarted","Data":"3c77c932bea4241e1fcf5156a9f275db1f17c2eaa202bd1436480e8fd9c33330"} Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.265681 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5668f89c89-nk5pt" event={"ID":"00ce8852-0763-48b1-ab50-42c386c1065c","Type":"ContainerStarted","Data":"b9510a01cbfa012449a26f05c5bd4efdcb3bbe95427ccbe70ac53060a8845003"} Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.282616 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-9vpsh" 
event={"ID":"6cdc0750-2437-4608-aeb2-57c6d2ee76ef","Type":"ContainerStarted","Data":"d8b0e3644a2fdb62925a388cbf2a5e1f89fea89fc91ad22ddf9d77c4e84ff417"} Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.282947 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-784f69c749-9vpsh" podUID="6cdc0750-2437-4608-aeb2-57c6d2ee76ef" containerName="init" containerID="cri-o://d8b0e3644a2fdb62925a388cbf2a5e1f89fea89fc91ad22ddf9d77c4e84ff417" gracePeriod=10 Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.302841 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2rqt" event={"ID":"15f67ad5-7444-4421-a361-44d9e90ec362","Type":"ContainerStarted","Data":"21d36dd1d8b7dc960ff6e36cff16e857613cc4ebf17f5259515ca5ccd141be8a"} Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.444578 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.674331 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.731507 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5668f89c89-nk5pt"] Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.799664 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-548b754669-p8jmv"] Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.801633 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.852324 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-548b754669-p8jmv"] Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.901737 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fxx6x"] Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.908672 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17b16462-38aa-4503-9f7a-283fa50254c7-logs\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.908717 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlhc\" (UniqueName: \"kubernetes.io/projected/17b16462-38aa-4503-9f7a-283fa50254c7-kube-api-access-7hlhc\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.908743 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-config-data\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.908818 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-scripts\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:54 crc 
kubenswrapper[4688]: I0930 17:32:54.908893 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17b16462-38aa-4503-9f7a-283fa50254c7-horizon-secret-key\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.931126 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:32:54 crc kubenswrapper[4688]: I0930 17:32:54.995937 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.010355 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17b16462-38aa-4503-9f7a-283fa50254c7-logs\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.010413 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hlhc\" (UniqueName: \"kubernetes.io/projected/17b16462-38aa-4503-9f7a-283fa50254c7-kube-api-access-7hlhc\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.010439 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-config-data\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.010499 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-scripts\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.010553 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17b16462-38aa-4503-9f7a-283fa50254c7-horizon-secret-key\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.010846 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17b16462-38aa-4503-9f7a-283fa50254c7-logs\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.011849 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-scripts\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.012087 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-config-data\") pod 
\"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.015097 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17b16462-38aa-4503-9f7a-283fa50254c7-horizon-secret-key\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.020070 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ptk4z"] Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.043763 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hlhc\" (UniqueName: \"kubernetes.io/projected/17b16462-38aa-4503-9f7a-283fa50254c7-kube-api-access-7hlhc\") pod \"horizon-548b754669-p8jmv\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") " pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.281375 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-548b754669-p8jmv" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.342763 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"841eada3-3904-429c-b6c0-f1238cc8e882","Type":"ContainerStarted","Data":"35b645a3173f5ccd11538993ad5d6b6e0ce96139d8f784c30b9ed5ada9826d42"} Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.348358 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7a76007-a70a-4246-ad40-a0e78b4fe699","Type":"ContainerStarted","Data":"3f2157ccf156226dad4e9e8855e8bfba47b737c6f8bfa78b54d6168efb7ec78d"} Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.359147 4688 generic.go:334] "Generic (PLEG): container finished" podID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" containerID="63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3" exitCode=0 Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.359476 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" event={"ID":"2da37fae-0534-46b8-bd59-8d4a3c6fffd5","Type":"ContainerDied","Data":"63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3"} Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.378846 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fxx6x" event={"ID":"d1c36448-a984-423c-a9b6-6c4d210ca4b9","Type":"ContainerStarted","Data":"269fee16a8ebecb22df963e8c2cc9d1f8328ceca85a486a4d01b6e246be44e23"} Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.394774 4688 generic.go:334] "Generic (PLEG): container finished" podID="6cdc0750-2437-4608-aeb2-57c6d2ee76ef" containerID="d8b0e3644a2fdb62925a388cbf2a5e1f89fea89fc91ad22ddf9d77c4e84ff417" exitCode=0 Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.395161 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-9vpsh" event={"ID":"6cdc0750-2437-4608-aeb2-57c6d2ee76ef","Type":"ContainerDied","Data":"d8b0e3644a2fdb62925a388cbf2a5e1f89fea89fc91ad22ddf9d77c4e84ff417"} Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.475878 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p2rqt" podStartSLOduration=4.475857654 podStartE2EDuration="4.475857654s" 
podCreationTimestamp="2025-09-30 17:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:32:55.474009678 +0000 UTC m=+1032.769447236" watchObservedRunningTime="2025-09-30 17:32:55.475857654 +0000 UTC m=+1032.771295212" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.502257 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480fb91d-2974-4178-9d2d-6e2b074dd673" path="/var/lib/kubelet/pods/480fb91d-2974-4178-9d2d-6e2b074dd673/volumes" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.503093 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2rqt" event={"ID":"15f67ad5-7444-4421-a361-44d9e90ec362","Type":"ContainerStarted","Data":"1be03ea4fb3c14df395d5297792c422830943ccbd8436612a11a4a61f8845ebc"} Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.526199 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ptk4z" event={"ID":"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a","Type":"ContainerStarted","Data":"b578d3113999c6ccd9d2630d9956887f975fb286c0d47b5a80060ec22b69a78b"} Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.581860 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c6cps" event={"ID":"8fc61503-23f4-4eb4-b3fa-e1825b84881f","Type":"ContainerStarted","Data":"6f0c546525639c499e4834852cc200703f6a0118ec580837bec64f4fe9012807"} Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.869530 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-9vpsh" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.939352 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-sb\") pod \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.939429 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-config\") pod \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.939521 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-nb\") pod \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.939545 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c8zs\" (UniqueName: \"kubernetes.io/projected/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-kube-api-access-2c8zs\") pod \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.939614 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-dns-svc\") pod \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\" (UID: \"6cdc0750-2437-4608-aeb2-57c6d2ee76ef\") " Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.964055 4688 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-kube-api-access-2c8zs" (OuterVolumeSpecName: "kube-api-access-2c8zs") pod "6cdc0750-2437-4608-aeb2-57c6d2ee76ef" (UID: "6cdc0750-2437-4608-aeb2-57c6d2ee76ef"). InnerVolumeSpecName "kube-api-access-2c8zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:55 crc kubenswrapper[4688]: I0930 17:32:55.995284 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cdc0750-2437-4608-aeb2-57c6d2ee76ef" (UID: "6cdc0750-2437-4608-aeb2-57c6d2ee76ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.042178 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c8zs\" (UniqueName: \"kubernetes.io/projected/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-kube-api-access-2c8zs\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.042218 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.059254 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6cdc0750-2437-4608-aeb2-57c6d2ee76ef" (UID: "6cdc0750-2437-4608-aeb2-57c6d2ee76ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.101491 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6cdc0750-2437-4608-aeb2-57c6d2ee76ef" (UID: "6cdc0750-2437-4608-aeb2-57c6d2ee76ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.111389 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-config" (OuterVolumeSpecName: "config") pod "6cdc0750-2437-4608-aeb2-57c6d2ee76ef" (UID: "6cdc0750-2437-4608-aeb2-57c6d2ee76ef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.143967 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.144013 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.144026 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6cdc0750-2437-4608-aeb2-57c6d2ee76ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.325364 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-548b754669-p8jmv"] Sep 30 17:32:56 crc kubenswrapper[4688]: W0930 17:32:56.333351 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17b16462_38aa_4503_9f7a_283fa50254c7.slice/crio-338e8b54888f2535723e7f5c34d19c0d5dc625aeabb5cf02730c0785d3737413 WatchSource:0}: Error finding container 338e8b54888f2535723e7f5c34d19c0d5dc625aeabb5cf02730c0785d3737413: Status 404 returned error can't find the container with id 338e8b54888f2535723e7f5c34d19c0d5dc625aeabb5cf02730c0785d3737413 Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.602271 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784f69c749-9vpsh" event={"ID":"6cdc0750-2437-4608-aeb2-57c6d2ee76ef","Type":"ContainerDied","Data":"50a220ffad2391921d10cdb1b7eaed9724e51984558eb02bcb152659ad900d3c"} Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.602340 4688 scope.go:117] "RemoveContainer" containerID="d8b0e3644a2fdb62925a388cbf2a5e1f89fea89fc91ad22ddf9d77c4e84ff417" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.602343 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784f69c749-9vpsh" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.607250 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-548b754669-p8jmv" event={"ID":"17b16462-38aa-4503-9f7a-283fa50254c7","Type":"ContainerStarted","Data":"338e8b54888f2535723e7f5c34d19c0d5dc625aeabb5cf02730c0785d3737413"} Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.610227 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ptk4z" event={"ID":"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a","Type":"ContainerStarted","Data":"0c8d05f724e462c6d7719cedfae78f7036f1f5e151aca53dbb627e947f926512"} Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.636504 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7a76007-a70a-4246-ad40-a0e78b4fe699","Type":"ContainerStarted","Data":"3b598478f6a68618df6d966f0dbc2c3af23e7071086cc8a69bfcbe0ac06a604d"} Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.641491 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" event={"ID":"2da37fae-0534-46b8-bd59-8d4a3c6fffd5","Type":"ContainerStarted","Data":"3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786"} Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.695161 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ptk4z" podStartSLOduration=3.695132014 podStartE2EDuration="3.695132014s" podCreationTimestamp="2025-09-30 17:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:32:56.656000779 +0000 UTC m=+1033.951438337" watchObservedRunningTime="2025-09-30 17:32:56.695132014 +0000 UTC m=+1033.990569572" Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.893411 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-9vpsh"] Sep 30 17:32:56 crc kubenswrapper[4688]: I0930 17:32:56.908258 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784f69c749-9vpsh"] Sep 30 17:32:57 crc kubenswrapper[4688]: I0930 17:32:57.446486 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cdc0750-2437-4608-aeb2-57c6d2ee76ef" path="/var/lib/kubelet/pods/6cdc0750-2437-4608-aeb2-57c6d2ee76ef/volumes" Sep 30 17:32:57 crc kubenswrapper[4688]: I0930 17:32:57.705399 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"841eada3-3904-429c-b6c0-f1238cc8e882","Type":"ContainerStarted","Data":"e5ed35f9ac424f03f2e12fe4baf9f9f77cec4ecce6c6f98c61799013a0ade375"} Sep 30 17:32:57 crc kubenswrapper[4688]: I0930 17:32:57.705664 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="841eada3-3904-429c-b6c0-f1238cc8e882" containerName="glance-log" containerID="cri-o://35b645a3173f5ccd11538993ad5d6b6e0ce96139d8f784c30b9ed5ada9826d42" gracePeriod=30 Sep 30 17:32:57 crc kubenswrapper[4688]: I0930 17:32:57.706381 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="841eada3-3904-429c-b6c0-f1238cc8e882" containerName="glance-httpd" containerID="cri-o://e5ed35f9ac424f03f2e12fe4baf9f9f77cec4ecce6c6f98c61799013a0ade375" gracePeriod=30 Sep 30 17:32:57 crc kubenswrapper[4688]: I0930 
17:32:57.725026 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" Sep 30 17:32:57 crc kubenswrapper[4688]: I0930 17:32:57.744955 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.744922156 podStartE2EDuration="6.744922156s" podCreationTimestamp="2025-09-30 17:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:32:57.744038824 +0000 UTC m=+1035.039476392" watchObservedRunningTime="2025-09-30 17:32:57.744922156 +0000 UTC m=+1035.040359714" Sep 30 17:32:57 crc kubenswrapper[4688]: I0930 17:32:57.790330 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" podStartSLOduration=6.790303939 podStartE2EDuration="6.790303939s" podCreationTimestamp="2025-09-30 17:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:32:57.784132203 +0000 UTC m=+1035.079569771" watchObservedRunningTime="2025-09-30 17:32:57.790303939 +0000 UTC m=+1035.085741497" Sep 30 17:32:58 crc kubenswrapper[4688]: I0930 17:32:58.750590 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7a76007-a70a-4246-ad40-a0e78b4fe699","Type":"ContainerStarted","Data":"e023b4131087c610e8e3a33a489d6d6f35fc4088c5fbe648bd1a8aac141244a4"} Sep 30 17:32:58 crc kubenswrapper[4688]: I0930 17:32:58.751425 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerName="glance-log" containerID="cri-o://3b598478f6a68618df6d966f0dbc2c3af23e7071086cc8a69bfcbe0ac06a604d" gracePeriod=30 Sep 30 17:32:58 crc kubenswrapper[4688]: I0930 17:32:58.751781 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerName="glance-httpd" containerID="cri-o://e023b4131087c610e8e3a33a489d6d6f35fc4088c5fbe648bd1a8aac141244a4" gracePeriod=30 Sep 30 17:32:58 crc kubenswrapper[4688]: I0930 17:32:58.811653 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.811629256 podStartE2EDuration="7.811629256s" podCreationTimestamp="2025-09-30 17:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:32:58.789354495 +0000 UTC m=+1036.084792053" watchObservedRunningTime="2025-09-30 17:32:58.811629256 +0000 UTC m=+1036.107066814" Sep 30 17:32:58 crc kubenswrapper[4688]: I0930 17:32:58.840351 4688 generic.go:334] "Generic (PLEG): container finished" podID="841eada3-3904-429c-b6c0-f1238cc8e882" containerID="e5ed35f9ac424f03f2e12fe4baf9f9f77cec4ecce6c6f98c61799013a0ade375" exitCode=0 Sep 30 17:32:58 crc kubenswrapper[4688]: I0930 17:32:58.840409 4688 generic.go:334] "Generic (PLEG): container finished" podID="841eada3-3904-429c-b6c0-f1238cc8e882" containerID="35b645a3173f5ccd11538993ad5d6b6e0ce96139d8f784c30b9ed5ada9826d42" exitCode=143 Sep 30 17:32:58 crc kubenswrapper[4688]: I0930 17:32:58.841688 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"841eada3-3904-429c-b6c0-f1238cc8e882","Type":"ContainerDied","Data":"e5ed35f9ac424f03f2e12fe4baf9f9f77cec4ecce6c6f98c61799013a0ade375"} Sep 30 17:32:58 crc kubenswrapper[4688]: I0930 17:32:58.841740 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"841eada3-3904-429c-b6c0-f1238cc8e882","Type":"ContainerDied","Data":"35b645a3173f5ccd11538993ad5d6b6e0ce96139d8f784c30b9ed5ada9826d42"} Sep 30 17:32:58 crc kubenswrapper[4688]: I0930 17:32:58.969936 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.069791 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-logs\") pod \"841eada3-3904-429c-b6c0-f1238cc8e882\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.069875 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-combined-ca-bundle\") pod \"841eada3-3904-429c-b6c0-f1238cc8e882\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.069931 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"841eada3-3904-429c-b6c0-f1238cc8e882\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.070001 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-httpd-run\") pod \"841eada3-3904-429c-b6c0-f1238cc8e882\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.070042 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-scripts\") pod \"841eada3-3904-429c-b6c0-f1238cc8e882\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.070123 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s8fv\" (UniqueName: \"kubernetes.io/projected/841eada3-3904-429c-b6c0-f1238cc8e882-kube-api-access-9s8fv\") pod \"841eada3-3904-429c-b6c0-f1238cc8e882\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.070234 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-public-tls-certs\") pod \"841eada3-3904-429c-b6c0-f1238cc8e882\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.070379 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-config-data\") pod \"841eada3-3904-429c-b6c0-f1238cc8e882\" (UID: \"841eada3-3904-429c-b6c0-f1238cc8e882\") " Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.070403 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-logs" (OuterVolumeSpecName: "logs") pod "841eada3-3904-429c-b6c0-f1238cc8e882" (UID: "841eada3-3904-429c-b6c0-f1238cc8e882"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.071056 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.072401 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "841eada3-3904-429c-b6c0-f1238cc8e882" (UID: "841eada3-3904-429c-b6c0-f1238cc8e882"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.077303 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/841eada3-3904-429c-b6c0-f1238cc8e882-kube-api-access-9s8fv" (OuterVolumeSpecName: "kube-api-access-9s8fv") pod "841eada3-3904-429c-b6c0-f1238cc8e882" (UID: "841eada3-3904-429c-b6c0-f1238cc8e882"). InnerVolumeSpecName "kube-api-access-9s8fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.081110 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-scripts" (OuterVolumeSpecName: "scripts") pod "841eada3-3904-429c-b6c0-f1238cc8e882" (UID: "841eada3-3904-429c-b6c0-f1238cc8e882"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.094717 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "841eada3-3904-429c-b6c0-f1238cc8e882" (UID: "841eada3-3904-429c-b6c0-f1238cc8e882"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.104335 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "841eada3-3904-429c-b6c0-f1238cc8e882" (UID: "841eada3-3904-429c-b6c0-f1238cc8e882"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.159382 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-config-data" (OuterVolumeSpecName: "config-data") pod "841eada3-3904-429c-b6c0-f1238cc8e882" (UID: "841eada3-3904-429c-b6c0-f1238cc8e882"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.174436 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.174503 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.174686 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.174716 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/841eada3-3904-429c-b6c0-f1238cc8e882-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.174728 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.174739 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s8fv\" (UniqueName: \"kubernetes.io/projected/841eada3-3904-429c-b6c0-f1238cc8e882-kube-api-access-9s8fv\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.179279 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "841eada3-3904-429c-b6c0-f1238cc8e882" (UID: "841eada3-3904-429c-b6c0-f1238cc8e882"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.202363 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.345651 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841eada3-3904-429c-b6c0-f1238cc8e882-public-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:32:59 crc kubenswrapper[4688]: I0930 17:32:59.354606 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:32:59.862931 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"841eada3-3904-429c-b6c0-f1238cc8e882","Type":"ContainerDied","Data":"c625384c219144b930481564d27fe0aac72d7ba01290bf6994efd6cca17329f9"} Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:32:59.863320 4688 scope.go:117] "RemoveContainer" containerID="e5ed35f9ac424f03f2e12fe4baf9f9f77cec4ecce6c6f98c61799013a0ade375" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:32:59.863487 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:32:59.887904 4688 generic.go:334] "Generic (PLEG): container finished" podID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerID="e023b4131087c610e8e3a33a489d6d6f35fc4088c5fbe648bd1a8aac141244a4" exitCode=0 Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:32:59.887944 4688 generic.go:334] "Generic (PLEG): container finished" podID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerID="3b598478f6a68618df6d966f0dbc2c3af23e7071086cc8a69bfcbe0ac06a604d" exitCode=143 Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:32:59.887972 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7a76007-a70a-4246-ad40-a0e78b4fe699","Type":"ContainerDied","Data":"e023b4131087c610e8e3a33a489d6d6f35fc4088c5fbe648bd1a8aac141244a4"} Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:32:59.888011 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7a76007-a70a-4246-ad40-a0e78b4fe699","Type":"ContainerDied","Data":"3b598478f6a68618df6d966f0dbc2c3af23e7071086cc8a69bfcbe0ac06a604d"} Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.002244 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.027842 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.042948 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.072972 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: E0930 17:33:00.073513 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerName="glance-log" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073528 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerName="glance-log" Sep 30 17:33:03 crc kubenswrapper[4688]: E0930 17:33:00.073537 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841eada3-3904-429c-b6c0-f1238cc8e882" containerName="glance-httpd" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073543 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="841eada3-3904-429c-b6c0-f1238cc8e882" containerName="glance-httpd" Sep 30 17:33:03 crc kubenswrapper[4688]: E0930 17:33:00.073563 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerName="glance-httpd" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073603 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerName="glance-httpd" Sep 30 17:33:03 crc kubenswrapper[4688]: E0930 17:33:00.073622 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdc0750-2437-4608-aeb2-57c6d2ee76ef" containerName="init" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073627 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdc0750-2437-4608-aeb2-57c6d2ee76ef" containerName="init" Sep 30 17:33:03 crc kubenswrapper[4688]: E0930 17:33:00.073646 4688 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="841eada3-3904-429c-b6c0-f1238cc8e882" containerName="glance-log" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073655 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="841eada3-3904-429c-b6c0-f1238cc8e882" containerName="glance-log" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073832 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerName="glance-log" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073867 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a76007-a70a-4246-ad40-a0e78b4fe699" containerName="glance-httpd" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073883 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="841eada3-3904-429c-b6c0-f1238cc8e882" containerName="glance-httpd" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073896 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="841eada3-3904-429c-b6c0-f1238cc8e882" containerName="glance-log" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.073916 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdc0750-2437-4608-aeb2-57c6d2ee76ef" containerName="init" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.075030 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.079185 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.081117 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sz7m\" (UniqueName: \"kubernetes.io/projected/a7a76007-a70a-4246-ad40-a0e78b4fe699-kube-api-access-7sz7m\") pod \"a7a76007-a70a-4246-ad40-a0e78b4fe699\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.081257 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-config-data\") pod \"a7a76007-a70a-4246-ad40-a0e78b4fe699\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.081288 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-httpd-run\") pod \"a7a76007-a70a-4246-ad40-a0e78b4fe699\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.081314 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a7a76007-a70a-4246-ad40-a0e78b4fe699\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.081333 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-combined-ca-bundle\") pod \"a7a76007-a70a-4246-ad40-a0e78b4fe699\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.081431 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-logs\") pod \"a7a76007-a70a-4246-ad40-a0e78b4fe699\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.081464 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-internal-tls-certs\") pod \"a7a76007-a70a-4246-ad40-a0e78b4fe699\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.081485 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-scripts\") pod \"a7a76007-a70a-4246-ad40-a0e78b4fe699\" (UID: \"a7a76007-a70a-4246-ad40-a0e78b4fe699\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.083147 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-logs" (OuterVolumeSpecName: "logs") pod "a7a76007-a70a-4246-ad40-a0e78b4fe699" (UID: "a7a76007-a70a-4246-ad40-a0e78b4fe699"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.084866 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a7a76007-a70a-4246-ad40-a0e78b4fe699" (UID: "a7a76007-a70a-4246-ad40-a0e78b4fe699"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.086360 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.091136 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a76007-a70a-4246-ad40-a0e78b4fe699-kube-api-access-7sz7m" (OuterVolumeSpecName: "kube-api-access-7sz7m") pod "a7a76007-a70a-4246-ad40-a0e78b4fe699" (UID: "a7a76007-a70a-4246-ad40-a0e78b4fe699"). InnerVolumeSpecName "kube-api-access-7sz7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.093357 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.094561 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "a7a76007-a70a-4246-ad40-a0e78b4fe699" (UID: "a7a76007-a70a-4246-ad40-a0e78b4fe699"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.103034 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-scripts" (OuterVolumeSpecName: "scripts") pod "a7a76007-a70a-4246-ad40-a0e78b4fe699" (UID: "a7a76007-a70a-4246-ad40-a0e78b4fe699"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.159735 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-config-data" (OuterVolumeSpecName: "config-data") pod "a7a76007-a70a-4246-ad40-a0e78b4fe699" (UID: "a7a76007-a70a-4246-ad40-a0e78b4fe699"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.172920 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7a76007-a70a-4246-ad40-a0e78b4fe699" (UID: "a7a76007-a70a-4246-ad40-a0e78b4fe699"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.183721 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-logs\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.183802 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-config-data\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.183840 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.183883 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jv9l\" (UniqueName: \"kubernetes.io/projected/934e6a06-9f81-40c5-b8aa-464841bb5312-kube-api-access-2jv9l\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.183979 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184019 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184038 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-scripts\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184176 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184191 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184219 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184231 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184241 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a76007-a70a-4246-ad40-a0e78b4fe699-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184250 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.184268 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sz7m\" (UniqueName: \"kubernetes.io/projected/a7a76007-a70a-4246-ad40-a0e78b4fe699-kube-api-access-7sz7m\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.223385 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.235188 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a7a76007-a70a-4246-ad40-a0e78b4fe699" (UID: "a7a76007-a70a-4246-ad40-a0e78b4fe699"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.287701 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-logs\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.287771 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-config-data\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.287809 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.287841 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jv9l\" (UniqueName: \"kubernetes.io/projected/934e6a06-9f81-40c5-b8aa-464841bb5312-kube-api-access-2jv9l\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.287925 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.287999 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.288025 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.288084 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-scripts\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.288199 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.288215 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a7a76007-a70a-4246-ad40-a0e78b4fe699-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.290760 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-logs\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.291177 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.291273 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.292756 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-scripts\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.296131 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.308026 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.312110 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-config-data\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.312874 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jv9l\" (UniqueName: \"kubernetes.io/projected/934e6a06-9f81-40c5-b8aa-464841bb5312-kube-api-access-2jv9l\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.336605 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc 
kubenswrapper[4688]: I0930 17:33:00.494762 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.898737 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-87885ffcc-xqwzc"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.931785 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a7a76007-a70a-4246-ad40-a0e78b4fe699","Type":"ContainerDied","Data":"3f2157ccf156226dad4e9e8855e8bfba47b737c6f8bfa78b54d6168efb7ec78d"} Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.931929 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.973635 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6dd7b7d6d-nfbnw"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.975911 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:00.983212 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.004638 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.017236 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pfn8\" (UniqueName: \"kubernetes.io/projected/7a40d4d2-4534-4959-b23a-07bd542d56b7-kube-api-access-7pfn8\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.017308 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-config-data\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.017426 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-scripts\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.017443 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-combined-ca-bundle\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.017461 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a40d4d2-4534-4959-b23a-07bd542d56b7-logs\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.017479 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-tls-certs\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.017511 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-secret-key\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.025253 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6dd7b7d6d-nfbnw"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.118318 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.119334 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-combined-ca-bundle\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.119363 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-scripts\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.119385 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a40d4d2-4534-4959-b23a-07bd542d56b7-logs\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.119401 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-tls-certs\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.119448 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-secret-key\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.119502 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pfn8\" (UniqueName: \"kubernetes.io/projected/7a40d4d2-4534-4959-b23a-07bd542d56b7-kube-api-access-7pfn8\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.119548 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-config-data\") 
pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.121818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-config-data\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.129864 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a40d4d2-4534-4959-b23a-07bd542d56b7-logs\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.130987 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.134836 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-scripts\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.140283 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-tls-certs\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.150324 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-combined-ca-bundle\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.160047 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-secret-key\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.166871 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pfn8\" (UniqueName: \"kubernetes.io/projected/7a40d4d2-4534-4959-b23a-07bd542d56b7-kube-api-access-7pfn8\") pod \"horizon-6dd7b7d6d-nfbnw\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.196665 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.198386 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.207517 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.208207 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.257867 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.316594 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-548b754669-p8jmv"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.335176 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.346415 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.346602 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.346771 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-logs\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.346847 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.346886 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v2lq\" (UniqueName: \"kubernetes.io/projected/37442aec-b83f-44ab-9157-bd5f62456bf5-kube-api-access-6v2lq\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.346914 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.347307 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.347373 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: E0930 17:33:01.365545 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance httpd-run internal-tls-certs kube-api-access-6v2lq logs scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="37442aec-b83f-44ab-9157-bd5f62456bf5" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.369157 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.421046 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dddb98558-nts96"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.423100 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.437022 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dddb98558-nts96"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.450061 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.450122 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.450179 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-logs\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.450205 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.450230 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v2lq\" (UniqueName: \"kubernetes.io/projected/37442aec-b83f-44ab-9157-bd5f62456bf5-kube-api-access-6v2lq\") pod \"glance-default-internal-api-0\" (UID: 
\"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.450254 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.450308 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.450341 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.451962 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.453438 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-logs\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.453868 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.469713 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.470266 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.474774 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="841eada3-3904-429c-b6c0-f1238cc8e882" path="/var/lib/kubelet/pods/841eada3-3904-429c-b6c0-f1238cc8e882/volumes" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.476091 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a76007-a70a-4246-ad40-a0e78b4fe699" 
path="/var/lib/kubelet/pods/a7a76007-a70a-4246-ad40-a0e78b4fe699/volumes" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.476671 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.488680 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.500295 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v2lq\" (UniqueName: \"kubernetes.io/projected/37442aec-b83f-44ab-9157-bd5f62456bf5-kube-api-access-6v2lq\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.528917 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.554727 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-horizon-tls-certs\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.554777 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-config-data\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.554805 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-logs\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.554821 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-horizon-secret-key\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.554880 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlbq6\" (UniqueName: \"kubernetes.io/projected/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-kube-api-access-vlbq6\") pod \"horizon-5dddb98558-nts96\" (UID: 
\"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.554952 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-combined-ca-bundle\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.555028 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-scripts\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.676181 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-scripts\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.676294 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-horizon-tls-certs\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.676314 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-config-data\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.676337 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-logs\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.676355 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-horizon-secret-key\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.676389 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlbq6\" (UniqueName: \"kubernetes.io/projected/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-kube-api-access-vlbq6\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.676436 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-combined-ca-bundle\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc 
kubenswrapper[4688]: I0930 17:33:01.677457 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-scripts\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.677514 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-logs\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.679358 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-config-data\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.684871 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-horizon-secret-key\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.686869 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-horizon-tls-certs\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.689791 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-combined-ca-bundle\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.694992 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlbq6\" (UniqueName: \"kubernetes.io/projected/c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c-kube-api-access-vlbq6\") pod \"horizon-5dddb98558-nts96\" (UID: \"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c\") " pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.760539 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.945107 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:01.967060 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.084774 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-scripts\") pod \"37442aec-b83f-44ab-9157-bd5f62456bf5\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.084895 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-config-data\") pod \"37442aec-b83f-44ab-9157-bd5f62456bf5\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.085064 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v2lq\" (UniqueName: \"kubernetes.io/projected/37442aec-b83f-44ab-9157-bd5f62456bf5-kube-api-access-6v2lq\") pod \"37442aec-b83f-44ab-9157-bd5f62456bf5\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.085122 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-logs\") pod \"37442aec-b83f-44ab-9157-bd5f62456bf5\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.085165 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-httpd-run\") pod \"37442aec-b83f-44ab-9157-bd5f62456bf5\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.085230 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-combined-ca-bundle\") pod \"37442aec-b83f-44ab-9157-bd5f62456bf5\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.085274 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"37442aec-b83f-44ab-9157-bd5f62456bf5\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.085304 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-internal-tls-certs\") pod \"37442aec-b83f-44ab-9157-bd5f62456bf5\" (UID: \"37442aec-b83f-44ab-9157-bd5f62456bf5\") " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.086197 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-logs" (OuterVolumeSpecName: "logs") pod "37442aec-b83f-44ab-9157-bd5f62456bf5" (UID: "37442aec-b83f-44ab-9157-bd5f62456bf5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.086263 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "37442aec-b83f-44ab-9157-bd5f62456bf5" (UID: "37442aec-b83f-44ab-9157-bd5f62456bf5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.092353 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37442aec-b83f-44ab-9157-bd5f62456bf5" (UID: "37442aec-b83f-44ab-9157-bd5f62456bf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.092405 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-scripts" (OuterVolumeSpecName: "scripts") pod "37442aec-b83f-44ab-9157-bd5f62456bf5" (UID: "37442aec-b83f-44ab-9157-bd5f62456bf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.092392 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "37442aec-b83f-44ab-9157-bd5f62456bf5" (UID: "37442aec-b83f-44ab-9157-bd5f62456bf5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.092614 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-config-data" (OuterVolumeSpecName: "config-data") pod "37442aec-b83f-44ab-9157-bd5f62456bf5" (UID: "37442aec-b83f-44ab-9157-bd5f62456bf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.102910 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "37442aec-b83f-44ab-9157-bd5f62456bf5" (UID: "37442aec-b83f-44ab-9157-bd5f62456bf5"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.102952 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37442aec-b83f-44ab-9157-bd5f62456bf5-kube-api-access-6v2lq" (OuterVolumeSpecName: "kube-api-access-6v2lq") pod "37442aec-b83f-44ab-9157-bd5f62456bf5" (UID: "37442aec-b83f-44ab-9157-bd5f62456bf5"). InnerVolumeSpecName "kube-api-access-6v2lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.187991 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.188048 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.188088 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.188116 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.188126 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.188136 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37442aec-b83f-44ab-9157-bd5f62456bf5-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.188146 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v2lq\" (UniqueName: \"kubernetes.io/projected/37442aec-b83f-44ab-9157-bd5f62456bf5-kube-api-access-6v2lq\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.188155 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37442aec-b83f-44ab-9157-bd5f62456bf5-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.208559 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.290637 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.474817 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.552096 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rclzn"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.552942 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-rclzn" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerName="dnsmasq-dns" containerID="cri-o://09528c74c0d13333ec0fad99d4b54d8c9cdda2eca0cc49b1baf72f01742b5e98" gracePeriod=10 Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.961774 4688 generic.go:334] "Generic (PLEG): container finished" podID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" 
containerID="09528c74c0d13333ec0fad99d4b54d8c9cdda2eca0cc49b1baf72f01742b5e98" exitCode=0 Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.962261 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:02.961906 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rclzn" event={"ID":"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce","Type":"ContainerDied","Data":"09528c74c0d13333ec0fad99d4b54d8c9cdda2eca0cc49b1baf72f01742b5e98"} Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.015406 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.024817 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.035796 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.038470 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.042644 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.044346 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.050823 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.218591 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.218686 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.218726 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.218753 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-logs\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.219195 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-scripts\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.219294 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.219494 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-config-data\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.219542 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4drc2\" (UniqueName: \"kubernetes.io/projected/45dadd69-d1a4-41fe-8e93-81a835ce2501-kube-api-access-4drc2\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.255545 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-rclzn" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.322016 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-scripts\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.322089 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.322149 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-config-data\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.322176 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4drc2\" (UniqueName: \"kubernetes.io/projected/45dadd69-d1a4-41fe-8e93-81a835ce2501-kube-api-access-4drc2\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.322376 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.322429 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.322477 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.322500 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-logs\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.322879 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.323771 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-logs\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.327411 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.329525 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-scripts\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.331185 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-config-data\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.338166 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.344689 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4drc2\" (UniqueName: \"kubernetes.io/projected/45dadd69-d1a4-41fe-8e93-81a835ce2501-kube-api-access-4drc2\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.348625 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.376398 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.482429 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37442aec-b83f-44ab-9157-bd5f62456bf5" path="/var/lib/kubelet/pods/37442aec-b83f-44ab-9157-bd5f62456bf5/volumes" Sep 30 17:33:03 crc kubenswrapper[4688]: I0930 17:33:03.661073 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:06 crc kubenswrapper[4688]: I0930 17:33:06.000998 4688 generic.go:334] "Generic (PLEG): container finished" podID="15f67ad5-7444-4421-a361-44d9e90ec362" containerID="1be03ea4fb3c14df395d5297792c422830943ccbd8436612a11a4a61f8845ebc" exitCode=0 Sep 30 17:33:06 crc kubenswrapper[4688]: I0930 17:33:06.001395 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2rqt" event={"ID":"15f67ad5-7444-4421-a361-44d9e90ec362","Type":"ContainerDied","Data":"1be03ea4fb3c14df395d5297792c422830943ccbd8436612a11a4a61f8845ebc"} Sep 30 17:33:13 crc kubenswrapper[4688]: I0930 17:33:13.252491 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-rclzn" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Sep 30 17:33:16 crc kubenswrapper[4688]: E0930 17:33:16.492093 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Sep 30 17:33:16 crc kubenswrapper[4688]: E0930 17:33:16.492748 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbh679h675h549h545h58dh99h68bh8fh645h664h7dhf7h57fh5bchc5h64dh5c4h64ch668h5d6h564h59bh558hffhbbh5bbh577h698h669h68h64fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgkk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-87885ffcc-xqwzc_openstack(50afc300-49d0-4acc-bbf3-27ca00a1a965): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 17:33:16 crc kubenswrapper[4688]: E0930 17:33:16.496415 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-87885ffcc-xqwzc" podUID="50afc300-49d0-4acc-bbf3-27ca00a1a965" Sep 30 17:33:16 crc kubenswrapper[4688]: E0930 17:33:16.506703 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Sep 30 17:33:16 crc kubenswrapper[4688]: E0930 17:33:16.507334 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc7h599hb8h59bh565h5fch5bbh574h5d5h656hd7h5b8h56bh5cch58ch676h5f6h55fhdbhc5h5f8h557h84h98h99hd6h67dh6h5f8h54dh545h55q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hlhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-548b754669-p8jmv_openstack(17b16462-38aa-4503-9f7a-283fa50254c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 30 17:33:16 crc kubenswrapper[4688]: E0930 17:33:16.510002 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-548b754669-p8jmv" podUID="17b16462-38aa-4503-9f7a-283fa50254c7"
Sep 30 17:33:17 crc kubenswrapper[4688]: E0930 17:33:17.894418 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Sep 30 17:33:17 crc kubenswrapper[4688]: E0930 17:33:17.894988 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjccw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-7f8lt_openstack(22982660-e8a2-4e49-9b5d-ebc252fd1c8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 30 17:33:17 crc kubenswrapper[4688]: E0930 17:33:17.896382 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-7f8lt" podUID="22982660-e8a2-4e49-9b5d-ebc252fd1c8e"
Sep 30 17:33:17 crc kubenswrapper[4688]: I0930 17:33:17.912240 4688 scope.go:117] "RemoveContainer" containerID="35b645a3173f5ccd11538993ad5d6b6e0ce96139d8f784c30b9ed5ada9826d42"
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.034262 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rclzn"
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.041764 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.132777 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2rqt" event={"ID":"15f67ad5-7444-4421-a361-44d9e90ec362","Type":"ContainerDied","Data":"21d36dd1d8b7dc960ff6e36cff16e857613cc4ebf17f5259515ca5ccd141be8a"}
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.132826 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21d36dd1d8b7dc960ff6e36cff16e857613cc4ebf17f5259515ca5ccd141be8a"
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.132890 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p2rqt"
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.139035 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-rclzn"
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.139262 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-rclzn" event={"ID":"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce","Type":"ContainerDied","Data":"459160ec077f31c96b1efc17932da872958ed3686c8274eb35fb4d60ebdaf6f6"}
Sep 30 17:33:18 crc kubenswrapper[4688]: E0930 17:33:18.140848 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-7f8lt" podUID="22982660-e8a2-4e49-9b5d-ebc252fd1c8e"
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.201159 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-config\") pod \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.201217 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-config-data\") pod \"15f67ad5-7444-4421-a361-44d9e90ec362\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.201383 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-credential-keys\") pod \"15f67ad5-7444-4421-a361-44d9e90ec362\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.201441 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-scripts\") pod \"15f67ad5-7444-4421-a361-44d9e90ec362\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.201465 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwxt8\" (UniqueName: \"kubernetes.io/projected/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-kube-api-access-vwxt8\") pod \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.201527 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-sb\") pod \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.201556 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-nb\") pod \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.204883 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzct5\" (UniqueName: \"kubernetes.io/projected/15f67ad5-7444-4421-a361-44d9e90ec362-kube-api-access-dzct5\") pod \"15f67ad5-7444-4421-a361-44d9e90ec362\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.205083 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-dns-svc\") pod \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\" (UID: \"e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.205119 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-fernet-keys\") pod \"15f67ad5-7444-4421-a361-44d9e90ec362\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.205162 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-combined-ca-bundle\") pod \"15f67ad5-7444-4421-a361-44d9e90ec362\" (UID: \"15f67ad5-7444-4421-a361-44d9e90ec362\") "
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.210168 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "15f67ad5-7444-4421-a361-44d9e90ec362" (UID: "15f67ad5-7444-4421-a361-44d9e90ec362"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.211108 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-kube-api-access-vwxt8" (OuterVolumeSpecName: "kube-api-access-vwxt8") pod "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" (UID: "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce"). InnerVolumeSpecName "kube-api-access-vwxt8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.212495 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f67ad5-7444-4421-a361-44d9e90ec362-kube-api-access-dzct5" (OuterVolumeSpecName: "kube-api-access-dzct5") pod "15f67ad5-7444-4421-a361-44d9e90ec362" (UID: "15f67ad5-7444-4421-a361-44d9e90ec362"). InnerVolumeSpecName "kube-api-access-dzct5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.212630 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-scripts" (OuterVolumeSpecName: "scripts") pod "15f67ad5-7444-4421-a361-44d9e90ec362" (UID: "15f67ad5-7444-4421-a361-44d9e90ec362"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.234781 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "15f67ad5-7444-4421-a361-44d9e90ec362" (UID: "15f67ad5-7444-4421-a361-44d9e90ec362"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.245114 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15f67ad5-7444-4421-a361-44d9e90ec362" (UID: "15f67ad5-7444-4421-a361-44d9e90ec362"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.254052 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-rclzn" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout"
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.254179 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-rclzn"
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.266234 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-config" (OuterVolumeSpecName: "config") pod "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" (UID: "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.268542 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" (UID: "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.271554 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-config-data" (OuterVolumeSpecName: "config-data") pod "15f67ad5-7444-4421-a361-44d9e90ec362" (UID: "15f67ad5-7444-4421-a361-44d9e90ec362"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.276065 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" (UID: "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.280061 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" (UID: "e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308281 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308336 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwxt8\" (UniqueName: \"kubernetes.io/projected/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-kube-api-access-vwxt8\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308544 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308806 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308837 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzct5\" (UniqueName: \"kubernetes.io/projected/15f67ad5-7444-4421-a361-44d9e90ec362-kube-api-access-dzct5\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308855 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308870 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-fernet-keys\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308886 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308900 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308915 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.308928 4688 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15f67ad5-7444-4421-a361-44d9e90ec362-credential-keys\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.475245 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rclzn"]
Sep 30 17:33:18 crc kubenswrapper[4688]: I0930 17:33:18.482364 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-rclzn"]
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.158887 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p2rqt"]
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.174937 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p2rqt"]
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.248468 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kd4xf"]
Sep 30 17:33:19 crc kubenswrapper[4688]: E0930 17:33:19.248974 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerName="dnsmasq-dns"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.248998 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerName="dnsmasq-dns"
Sep 30 17:33:19 crc kubenswrapper[4688]: E0930 17:33:19.249025 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f67ad5-7444-4421-a361-44d9e90ec362" containerName="keystone-bootstrap"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.249036 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f67ad5-7444-4421-a361-44d9e90ec362" containerName="keystone-bootstrap"
Sep 30 17:33:19 crc kubenswrapper[4688]: E0930 17:33:19.249046 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerName="init"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.249054 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerName="init"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.249290 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" containerName="dnsmasq-dns"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.249386 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f67ad5-7444-4421-a361-44d9e90ec362" containerName="keystone-bootstrap"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.250063 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.252524 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.252745 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.252905 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jhgvn"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.253217 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.258962 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kd4xf"]
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.341034 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-combined-ca-bundle\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.341123 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-credential-keys\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.341163 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-config-data\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.341487 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-fernet-keys\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.341583 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-scripts\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.341614 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6q7v\" (UniqueName: \"kubernetes.io/projected/74e0dade-3e8d-4c68-b992-e672f563e545-kube-api-access-l6q7v\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.441729 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f67ad5-7444-4421-a361-44d9e90ec362" path="/var/lib/kubelet/pods/15f67ad5-7444-4421-a361-44d9e90ec362/volumes"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.442549 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce" path="/var/lib/kubelet/pods/e98bf2fb-356a-4e92-8128-cf0d5c8fd1ce/volumes"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.444767 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6q7v\" (UniqueName: \"kubernetes.io/projected/74e0dade-3e8d-4c68-b992-e672f563e545-kube-api-access-l6q7v\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.444911 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-combined-ca-bundle\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.444972 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-credential-keys\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.445010 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-config-data\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.445126 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-fernet-keys\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.445168 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-scripts\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.450545 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-fernet-keys\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.453042 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-credential-keys\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.455724 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-combined-ca-bundle\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.456905 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-scripts\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.461400 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-config-data\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.463728 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6q7v\" (UniqueName: \"kubernetes.io/projected/74e0dade-3e8d-4c68-b992-e672f563e545-kube-api-access-l6q7v\") pod \"keystone-bootstrap-kd4xf\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:19 crc kubenswrapper[4688]: I0930 17:33:19.608686 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kd4xf"
Sep 30 17:33:29 crc kubenswrapper[4688]: E0930 17:33:29.661263 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Sep 30 17:33:29 crc kubenswrapper[4688]: E0930 17:33:29.662305 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v477x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-fxx6x_openstack(d1c36448-a984-423c-a9b6-6c4d210ca4b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 30 17:33:29 crc kubenswrapper[4688]: E0930 17:33:29.663793 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-fxx6x" podUID="d1c36448-a984-423c-a9b6-6c4d210ca4b9"
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.768182 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-548b754669-p8jmv"
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.779029 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.879133 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-scripts\") pod \"17b16462-38aa-4503-9f7a-283fa50254c7\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.879718 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-config-data\") pod \"17b16462-38aa-4503-9f7a-283fa50254c7\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.879795 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-scripts" (OuterVolumeSpecName: "scripts") pod "17b16462-38aa-4503-9f7a-283fa50254c7" (UID: "17b16462-38aa-4503-9f7a-283fa50254c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.879894 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hlhc\" (UniqueName: \"kubernetes.io/projected/17b16462-38aa-4503-9f7a-283fa50254c7-kube-api-access-7hlhc\") pod \"17b16462-38aa-4503-9f7a-283fa50254c7\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.880062 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17b16462-38aa-4503-9f7a-283fa50254c7-logs\") pod \"17b16462-38aa-4503-9f7a-283fa50254c7\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.880172 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-config-data" (OuterVolumeSpecName: "config-data") pod "17b16462-38aa-4503-9f7a-283fa50254c7" (UID: "17b16462-38aa-4503-9f7a-283fa50254c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.880323 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17b16462-38aa-4503-9f7a-283fa50254c7-horizon-secret-key\") pod \"17b16462-38aa-4503-9f7a-283fa50254c7\" (UID: \"17b16462-38aa-4503-9f7a-283fa50254c7\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.880445 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17b16462-38aa-4503-9f7a-283fa50254c7-logs" (OuterVolumeSpecName: "logs") pod "17b16462-38aa-4503-9f7a-283fa50254c7" (UID: "17b16462-38aa-4503-9f7a-283fa50254c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.880987 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.881011 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17b16462-38aa-4503-9f7a-283fa50254c7-logs\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.881021 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17b16462-38aa-4503-9f7a-283fa50254c7-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.886072 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b16462-38aa-4503-9f7a-283fa50254c7-kube-api-access-7hlhc" (OuterVolumeSpecName: "kube-api-access-7hlhc") pod "17b16462-38aa-4503-9f7a-283fa50254c7" (UID: "17b16462-38aa-4503-9f7a-283fa50254c7"). InnerVolumeSpecName "kube-api-access-7hlhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.895631 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b16462-38aa-4503-9f7a-283fa50254c7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "17b16462-38aa-4503-9f7a-283fa50254c7" (UID: "17b16462-38aa-4503-9f7a-283fa50254c7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.982222 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50afc300-49d0-4acc-bbf3-27ca00a1a965-logs\") pod \"50afc300-49d0-4acc-bbf3-27ca00a1a965\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.982288 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-config-data\") pod \"50afc300-49d0-4acc-bbf3-27ca00a1a965\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.982345 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-scripts\") pod \"50afc300-49d0-4acc-bbf3-27ca00a1a965\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.982420 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgkk7\" (UniqueName: \"kubernetes.io/projected/50afc300-49d0-4acc-bbf3-27ca00a1a965-kube-api-access-dgkk7\") pod \"50afc300-49d0-4acc-bbf3-27ca00a1a965\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.982500 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50afc300-49d0-4acc-bbf3-27ca00a1a965-horizon-secret-key\") pod \"50afc300-49d0-4acc-bbf3-27ca00a1a965\" (UID: \"50afc300-49d0-4acc-bbf3-27ca00a1a965\") "
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.982883 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hlhc\" (UniqueName: \"kubernetes.io/projected/17b16462-38aa-4503-9f7a-283fa50254c7-kube-api-access-7hlhc\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.982899 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17b16462-38aa-4503-9f7a-283fa50254c7-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.983051 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50afc300-49d0-4acc-bbf3-27ca00a1a965-logs" (OuterVolumeSpecName: "logs") pod "50afc300-49d0-4acc-bbf3-27ca00a1a965" (UID: "50afc300-49d0-4acc-bbf3-27ca00a1a965"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.983114 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-config-data" (OuterVolumeSpecName: "config-data") pod "50afc300-49d0-4acc-bbf3-27ca00a1a965" (UID: "50afc300-49d0-4acc-bbf3-27ca00a1a965"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.983724 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-scripts" (OuterVolumeSpecName: "scripts") pod "50afc300-49d0-4acc-bbf3-27ca00a1a965" (UID: "50afc300-49d0-4acc-bbf3-27ca00a1a965"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.991788 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50afc300-49d0-4acc-bbf3-27ca00a1a965-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "50afc300-49d0-4acc-bbf3-27ca00a1a965" (UID: "50afc300-49d0-4acc-bbf3-27ca00a1a965"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:29 crc kubenswrapper[4688]: I0930 17:33:29.991839 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50afc300-49d0-4acc-bbf3-27ca00a1a965-kube-api-access-dgkk7" (OuterVolumeSpecName: "kube-api-access-dgkk7") pod "50afc300-49d0-4acc-bbf3-27ca00a1a965" (UID: "50afc300-49d0-4acc-bbf3-27ca00a1a965"). InnerVolumeSpecName "kube-api-access-dgkk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.084599 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50afc300-49d0-4acc-bbf3-27ca00a1a965-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.084637 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50afc300-49d0-4acc-bbf3-27ca00a1a965-logs\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.084647 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.084659 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50afc300-49d0-4acc-bbf3-27ca00a1a965-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.084668 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgkk7\" (UniqueName: \"kubernetes.io/projected/50afc300-49d0-4acc-bbf3-27ca00a1a965-kube-api-access-dgkk7\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.091070 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dddb98558-nts96"]
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.306149 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-87885ffcc-xqwzc" event={"ID":"50afc300-49d0-4acc-bbf3-27ca00a1a965","Type":"ContainerDied","Data":"91417fa57a7ba4a5f89d28ebb528066aaad4c302882d0d82bb62a76fd902de1b"}
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.306176 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-87885ffcc-xqwzc"
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.309636 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-548b754669-p8jmv"
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.309645 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-548b754669-p8jmv" event={"ID":"17b16462-38aa-4503-9f7a-283fa50254c7","Type":"ContainerDied","Data":"338e8b54888f2535723e7f5c34d19c0d5dc625aeabb5cf02730c0785d3737413"}
Sep 30 17:33:30 crc kubenswrapper[4688]: E0930 17:33:30.312653 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-fxx6x" podUID="d1c36448-a984-423c-a9b6-6c4d210ca4b9"
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.431313 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-87885ffcc-xqwzc"]
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.448657 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-87885ffcc-xqwzc"]
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.464312 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-548b754669-p8jmv"]
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.470853 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-548b754669-p8jmv"]
Sep 30 17:33:30 crc kubenswrapper[4688]: W0930 17:33:30.840656 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5430a6e_5e2c_40e5_8b0f_0b0bb4e5db5c.slice/crio-fcd8ff174a8a536b24351984adbdfef1ff27f0bdc3add12d4e2b0704940e3250 WatchSource:0}: Error finding container fcd8ff174a8a536b24351984adbdfef1ff27f0bdc3add12d4e2b0704940e3250: Status 404 returned error can't find the container with id fcd8ff174a8a536b24351984adbdfef1ff27f0bdc3add12d4e2b0704940e3250
Sep 30 17:33:30 crc kubenswrapper[4688]: I0930 17:33:30.847533 4688 scope.go:117] "RemoveContainer" containerID="e023b4131087c610e8e3a33a489d6d6f35fc4088c5fbe648bd1a8aac141244a4"
Sep 30 17:33:30 crc kubenswrapper[4688]: E0930 17:33:30.888356 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Sep 30 17:33:30 crc kubenswrapper[4688]: E0930 17:33:30.888632 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4rwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-c6cps_openstack(8fc61503-23f4-4eb4-b3fa-e1825b84881f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 30 17:33:30 crc kubenswrapper[4688]: E0930 17:33:30.889840 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-c6cps" podUID="8fc61503-23f4-4eb4-b3fa-e1825b84881f"
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.034513 4688 scope.go:117] "RemoveContainer" containerID="3b598478f6a68618df6d966f0dbc2c3af23e7071086cc8a69bfcbe0ac06a604d"
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.084909 4688 scope.go:117] "RemoveContainer" containerID="09528c74c0d13333ec0fad99d4b54d8c9cdda2eca0cc49b1baf72f01742b5e98"
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.129613 4688 scope.go:117] "RemoveContainer" containerID="ee3ad7acb1b2623d7bf727ee3ac1f55f5e42d84d385db1e53d518f5ce1815c71"
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.329516 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerStarted","Data":"a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46"}
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.333377 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dddb98558-nts96" event={"ID":"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c","Type":"ContainerStarted","Data":"d3553f5acba0d8745357ee52fe4019f0aaae017a6352158ec287e19db3849773"}
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.333430 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dddb98558-nts96" event={"ID":"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c","Type":"ContainerStarted","Data":"fcd8ff174a8a536b24351984adbdfef1ff27f0bdc3add12d4e2b0704940e3250"}
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.344229 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5668f89c89-nk5pt" event={"ID":"00ce8852-0763-48b1-ab50-42c386c1065c","Type":"ContainerStarted","Data":"17cd8706cdd698803d31c2686d2428856f2f3ad65c1431969d00d1e86cdcc8d2"}
Sep 30 17:33:31 crc kubenswrapper[4688]: E0930 17:33:31.349397 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-c6cps" podUID="8fc61503-23f4-4eb4-b3fa-e1825b84881f"
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.446976 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b16462-38aa-4503-9f7a-283fa50254c7" path="/var/lib/kubelet/pods/17b16462-38aa-4503-9f7a-283fa50254c7/volumes"
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.447448 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50afc300-49d0-4acc-bbf3-27ca00a1a965" path="/var/lib/kubelet/pods/50afc300-49d0-4acc-bbf3-27ca00a1a965/volumes"
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.462425 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6dd7b7d6d-nfbnw"]
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.476805 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 30 17:33:31 crc kubenswrapper[4688]: W0930 17:33:31.484167 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod934e6a06_9f81_40c5_b8aa_464841bb5312.slice/crio-1439d1c3e2ccfdc72e08c312294140176449071f156a77286f33b780ff73c25e WatchSource:0}: Error finding container 1439d1c3e2ccfdc72e08c312294140176449071f156a77286f33b780ff73c25e: Status 404 returned error can't find the container with id 1439d1c3e2ccfdc72e08c312294140176449071f156a77286f33b780ff73c25e
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.552465 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Sep 30 17:33:31 crc kubenswrapper[4688]: W0930 17:33:31.578683 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45dadd69_d1a4_41fe_8e93_81a835ce2501.slice/crio-f8fabf17368c4650adcb98abf0b3400efecd1d6101b8c1370b70e61227f6dbc8 WatchSource:0}: Error finding container f8fabf17368c4650adcb98abf0b3400efecd1d6101b8c1370b70e61227f6dbc8: Status 404 returned error can't find the container with id f8fabf17368c4650adcb98abf0b3400efecd1d6101b8c1370b70e61227f6dbc8
Sep 30 17:33:31 crc kubenswrapper[4688]: I0930 17:33:31.596728 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kd4xf"]
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.360972 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5668f89c89-nk5pt" event={"ID":"00ce8852-0763-48b1-ab50-42c386c1065c","Type":"ContainerStarted","Data":"f251b59c0920d79e3e009e427d10d16fb7d2d9bb829eea98102072c8dd09a5d4"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.361322 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5668f89c89-nk5pt" podUID="00ce8852-0763-48b1-ab50-42c386c1065c" containerName="horizon-log" containerID="cri-o://17cd8706cdd698803d31c2686d2428856f2f3ad65c1431969d00d1e86cdcc8d2" gracePeriod=30
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.361587 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5668f89c89-nk5pt" podUID="00ce8852-0763-48b1-ab50-42c386c1065c" containerName="horizon" containerID="cri-o://f251b59c0920d79e3e009e427d10d16fb7d2d9bb829eea98102072c8dd09a5d4" gracePeriod=30
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.364958 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"934e6a06-9f81-40c5-b8aa-464841bb5312","Type":"ContainerStarted","Data":"c562d4f1116921cb50cb3624f74450cec6bc022a8a664ef6d2b64eda418b69f1"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.364998 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"934e6a06-9f81-40c5-b8aa-464841bb5312","Type":"ContainerStarted","Data":"1439d1c3e2ccfdc72e08c312294140176449071f156a77286f33b780ff73c25e"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.372019 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dd7b7d6d-nfbnw" event={"ID":"7a40d4d2-4534-4959-b23a-07bd542d56b7","Type":"ContainerStarted","Data":"0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.372075 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dd7b7d6d-nfbnw" event={"ID":"7a40d4d2-4534-4959-b23a-07bd542d56b7","Type":"ContainerStarted","Data":"6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.372086 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dd7b7d6d-nfbnw" event={"ID":"7a40d4d2-4534-4959-b23a-07bd542d56b7","Type":"ContainerStarted","Data":"29384fed607cf520fc9adf3785a83de87cae189fd175a2801f568c255ad05475"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.375296 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dddb98558-nts96" event={"ID":"c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c","Type":"ContainerStarted","Data":"76cbecd42ba5c10f0597f0481d18d2593071073e9295d161162a6263cfb2852d"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.378672 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"45dadd69-d1a4-41fe-8e93-81a835ce2501","Type":"ContainerStarted","Data":"f8fabf17368c4650adcb98abf0b3400efecd1d6101b8c1370b70e61227f6dbc8"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.382213 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5668f89c89-nk5pt" podStartSLOduration=5.206824773 podStartE2EDuration="41.382192801s" podCreationTimestamp="2025-09-30 17:32:51 +0000 UTC" firstStartedPulling="2025-09-30 17:32:53.485471336 +0000 UTC m=+1030.780908904" lastFinishedPulling="2025-09-30 17:33:29.660839374 +0000 UTC m=+1066.956276932" observedRunningTime="2025-09-30 17:33:32.380728585 +0000 UTC m=+1069.676166143" watchObservedRunningTime="2025-09-30 17:33:32.382192801 +0000 UTC m=+1069.677630359"
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.385251 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kd4xf" event={"ID":"74e0dade-3e8d-4c68-b992-e672f563e545","Type":"ContainerStarted","Data":"b5746ffe34dd04272d060188c687ada301704724a0b355fcc9a5ce5be5ead2cc"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.385306 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kd4xf" event={"ID":"74e0dade-3e8d-4c68-b992-e672f563e545","Type":"ContainerStarted","Data":"ca7cc27fb60885886d4b8f7ce8ecb27b6ac63837af8d3cae15dcbb5b119e0d2f"}
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.414304 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5dddb98558-nts96" podStartSLOduration=31.414280759 podStartE2EDuration="31.414280759s" podCreationTimestamp="2025-09-30 17:33:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:32.408263138 +0000 UTC m=+1069.703700716" watchObservedRunningTime="2025-09-30 17:33:32.414280759 +0000 UTC m=+1069.709718317"
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.437986 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6dd7b7d6d-nfbnw" podStartSLOduration=32.437964516 podStartE2EDuration="32.437964516s" podCreationTimestamp="2025-09-30 17:33:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:32.428177569 +0000 UTC m=+1069.723615137" watchObservedRunningTime="2025-09-30 17:33:32.437964516 +0000 UTC m=+1069.733402074"
Sep 30 17:33:32 crc kubenswrapper[4688]: I0930 17:33:32.464799 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kd4xf" podStartSLOduration=13.46477315 podStartE2EDuration="13.46477315s" podCreationTimestamp="2025-09-30 17:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:32.452393719 +0000 UTC m=+1069.747831277" watchObservedRunningTime="2025-09-30 17:33:32.46477315 +0000 UTC m=+1069.760210708"
Sep 30 17:33:33 crc kubenswrapper[4688]: I0930 17:33:33.416913 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"45dadd69-d1a4-41fe-8e93-81a835ce2501","Type":"ContainerStarted","Data":"62934cb518d898fe7bb2900891a3a2743470ba347554470984cbf411ed3e0443"}
Sep 30 17:33:33 crc kubenswrapper[4688]: I0930 17:33:33.427675 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"934e6a06-9f81-40c5-b8aa-464841bb5312","Type":"ContainerStarted","Data":"f20886781402bdc1a6e1f1fd09b6dd4974b4bfb05dbd29c4aa89e7d1a33da05d"}
Sep 30 17:33:33 crc kubenswrapper[4688]: I0930 17:33:33.440979 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerName="glance-log" containerID="cri-o://c562d4f1116921cb50cb3624f74450cec6bc022a8a664ef6d2b64eda418b69f1" gracePeriod=30
Sep 30 17:33:33 crc kubenswrapper[4688]: I0930 17:33:33.441182 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerName="glance-httpd" containerID="cri-o://f20886781402bdc1a6e1f1fd09b6dd4974b4bfb05dbd29c4aa89e7d1a33da05d" gracePeriod=30
Sep 30 17:33:33 crc kubenswrapper[4688]: I0930 17:33:33.542246 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=33.542123797 podStartE2EDuration="33.542123797s" podCreationTimestamp="2025-09-30 17:33:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:33.534517195 +0000 UTC m=+1070.829954763" watchObservedRunningTime="2025-09-30 17:33:33.542123797 +0000 UTC m=+1070.837561355"
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.439453 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7f8lt" event={"ID":"22982660-e8a2-4e49-9b5d-ebc252fd1c8e","Type":"ContainerStarted","Data":"090587b96d5ef62200849cd8b892601a761cb9b887bc68f789f3886110d2e3a0"}
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.442993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"45dadd69-d1a4-41fe-8e93-81a835ce2501","Type":"ContainerStarted","Data":"6d21aaaa28eb918312e391e7a5819cd1bc060be08e1f0d91b17b81b40ac19d2b"}
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.446742 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerStarted","Data":"7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca"}
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.452171 4688 generic.go:334] "Generic (PLEG): container finished" podID="fc8979b0-26c0-428a-bc3e-4ee1d5ea066a" containerID="0c8d05f724e462c6d7719cedfae78f7036f1f5e151aca53dbb627e947f926512" exitCode=0
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.452258 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ptk4z" event={"ID":"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a","Type":"ContainerDied","Data":"0c8d05f724e462c6d7719cedfae78f7036f1f5e151aca53dbb627e947f926512"}
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.455976 4688 generic.go:334] "Generic (PLEG): container finished" podID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerID="f20886781402bdc1a6e1f1fd09b6dd4974b4bfb05dbd29c4aa89e7d1a33da05d" exitCode=0
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.455998 4688 generic.go:334] "Generic (PLEG): container finished" podID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerID="c562d4f1116921cb50cb3624f74450cec6bc022a8a664ef6d2b64eda418b69f1" exitCode=143
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.456019 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"934e6a06-9f81-40c5-b8aa-464841bb5312","Type":"ContainerDied","Data":"f20886781402bdc1a6e1f1fd09b6dd4974b4bfb05dbd29c4aa89e7d1a33da05d"}
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.456038 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"934e6a06-9f81-40c5-b8aa-464841bb5312","Type":"ContainerDied","Data":"c562d4f1116921cb50cb3624f74450cec6bc022a8a664ef6d2b64eda418b69f1"}
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.462442 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7f8lt" podStartSLOduration=4.077490828 podStartE2EDuration="43.462415181s" podCreationTimestamp="2025-09-30 17:32:51 +0000 UTC" firstStartedPulling="2025-09-30 17:32:53.871181234 +0000 UTC m=+1031.166618792" lastFinishedPulling="2025-09-30 17:33:33.256105587 +0000 UTC m=+1070.551543145" observedRunningTime="2025-09-30 17:33:34.459024285 +0000 UTC m=+1071.754461843" watchObservedRunningTime="2025-09-30 17:33:34.462415181 +0000 UTC m=+1071.757852739"
Sep 30 17:33:34 crc kubenswrapper[4688]: I0930 17:33:34.538825 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.538799243 podStartE2EDuration="31.538799243s" podCreationTimestamp="2025-09-30 17:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:34.52398163 +0000 UTC m=+1071.819419188" watchObservedRunningTime="2025-09-30 17:33:34.538799243 +0000 UTC m=+1071.834236801"
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.031668 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.220345 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-httpd-run\") pod \"934e6a06-9f81-40c5-b8aa-464841bb5312\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") "
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.220452 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-logs\") pod \"934e6a06-9f81-40c5-b8aa-464841bb5312\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") "
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.220547 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-public-tls-certs\") pod \"934e6a06-9f81-40c5-b8aa-464841bb5312\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") "
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.220624 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-combined-ca-bundle\") pod \"934e6a06-9f81-40c5-b8aa-464841bb5312\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") "
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.220652 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jv9l\" (UniqueName: \"kubernetes.io/projected/934e6a06-9f81-40c5-b8aa-464841bb5312-kube-api-access-2jv9l\") pod \"934e6a06-9f81-40c5-b8aa-464841bb5312\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") "
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.220706 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-scripts\") pod \"934e6a06-9f81-40c5-b8aa-464841bb5312\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") "
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.220740 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-config-data\") pod \"934e6a06-9f81-40c5-b8aa-464841bb5312\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") "
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.220789 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"934e6a06-9f81-40c5-b8aa-464841bb5312\" (UID: \"934e6a06-9f81-40c5-b8aa-464841bb5312\") "
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.224973 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "934e6a06-9f81-40c5-b8aa-464841bb5312" (UID: "934e6a06-9f81-40c5-b8aa-464841bb5312"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.224991 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-logs" (OuterVolumeSpecName: "logs") pod "934e6a06-9f81-40c5-b8aa-464841bb5312" (UID: "934e6a06-9f81-40c5-b8aa-464841bb5312"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.230854 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-scripts" (OuterVolumeSpecName: "scripts") pod "934e6a06-9f81-40c5-b8aa-464841bb5312" (UID: "934e6a06-9f81-40c5-b8aa-464841bb5312"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.231040 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934e6a06-9f81-40c5-b8aa-464841bb5312-kube-api-access-2jv9l" (OuterVolumeSpecName: "kube-api-access-2jv9l") pod "934e6a06-9f81-40c5-b8aa-464841bb5312" (UID: "934e6a06-9f81-40c5-b8aa-464841bb5312"). InnerVolumeSpecName "kube-api-access-2jv9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.231079 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "934e6a06-9f81-40c5-b8aa-464841bb5312" (UID: "934e6a06-9f81-40c5-b8aa-464841bb5312"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.251015 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "934e6a06-9f81-40c5-b8aa-464841bb5312" (UID: "934e6a06-9f81-40c5-b8aa-464841bb5312"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.274212 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-config-data" (OuterVolumeSpecName: "config-data") pod "934e6a06-9f81-40c5-b8aa-464841bb5312" (UID: "934e6a06-9f81-40c5-b8aa-464841bb5312"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.276826 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "934e6a06-9f81-40c5-b8aa-464841bb5312" (UID: "934e6a06-9f81-40c5-b8aa-464841bb5312"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.323732 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.323774 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.323786 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934e6a06-9f81-40c5-b8aa-464841bb5312-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.323796 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-public-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.323805 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.323818 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jv9l\" (UniqueName: \"kubernetes.io/projected/934e6a06-9f81-40c5-b8aa-464841bb5312-kube-api-access-2jv9l\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.323829 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.323838 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934e6a06-9f81-40c5-b8aa-464841bb5312-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.344321 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.426090 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.470777 
4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"934e6a06-9f81-40c5-b8aa-464841bb5312","Type":"ContainerDied","Data":"1439d1c3e2ccfdc72e08c312294140176449071f156a77286f33b780ff73c25e"} Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.475709 4688 scope.go:117] "RemoveContainer" containerID="f20886781402bdc1a6e1f1fd09b6dd4974b4bfb05dbd29c4aa89e7d1a33da05d" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.470847 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.507527 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.515042 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.533548 4688 scope.go:117] "RemoveContainer" containerID="c562d4f1116921cb50cb3624f74450cec6bc022a8a664ef6d2b64eda418b69f1" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.548209 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:35 crc kubenswrapper[4688]: E0930 17:33:35.551634 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerName="glance-log" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.552176 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerName="glance-log" Sep 30 17:33:35 crc kubenswrapper[4688]: E0930 17:33:35.552464 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerName="glance-httpd" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.552584 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerName="glance-httpd" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.552927 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerName="glance-log" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.553031 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="934e6a06-9f81-40c5-b8aa-464841bb5312" containerName="glance-httpd" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.554313 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.564609 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.564906 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.621897 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.733958 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.734104 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-scripts\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.734162 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.734193 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.734264 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.734293 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4dqf\" (UniqueName: \"kubernetes.io/projected/d21528b5-a44f-47ec-9a81-6e408908f80d-kube-api-access-t4dqf\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.734724 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-config-data\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.734753 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-logs\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.838960 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-config-data\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.839022 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-logs\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.839068 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.839085 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-scripts\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.839135 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.839166 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.839207 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.839230 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4dqf\" (UniqueName: \"kubernetes.io/projected/d21528b5-a44f-47ec-9a81-6e408908f80d-kube-api-access-t4dqf\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.840684 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-logs\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.840816 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.841127 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.845696 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-config-data\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.851815 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.852490 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.854087 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.870534 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4dqf\" (UniqueName: \"kubernetes.io/projected/d21528b5-a44f-47ec-9a81-6e408908f80d-kube-api-access-t4dqf\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.872311 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-scripts\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.895892 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.921087 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.940454 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-config\") pod \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.940545 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vvhs\" (UniqueName: \"kubernetes.io/projected/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-kube-api-access-7vvhs\") pod \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.940598 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-combined-ca-bundle\") pod \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\" (UID: \"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a\") " Sep 30 17:33:35 crc kubenswrapper[4688]: I0930 17:33:35.946532 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-kube-api-access-7vvhs" (OuterVolumeSpecName: "kube-api-access-7vvhs") pod "fc8979b0-26c0-428a-bc3e-4ee1d5ea066a" (UID: "fc8979b0-26c0-428a-bc3e-4ee1d5ea066a"). InnerVolumeSpecName "kube-api-access-7vvhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.005316 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-config" (OuterVolumeSpecName: "config") pod "fc8979b0-26c0-428a-bc3e-4ee1d5ea066a" (UID: "fc8979b0-26c0-428a-bc3e-4ee1d5ea066a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.007903 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc8979b0-26c0-428a-bc3e-4ee1d5ea066a" (UID: "fc8979b0-26c0-428a-bc3e-4ee1d5ea066a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.043658 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.043772 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vvhs\" (UniqueName: \"kubernetes.io/projected/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-kube-api-access-7vvhs\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.043791 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.539364 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ptk4z" event={"ID":"fc8979b0-26c0-428a-bc3e-4ee1d5ea066a","Type":"ContainerDied","Data":"b578d3113999c6ccd9d2630d9956887f975fb286c0d47b5a80060ec22b69a78b"} Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.539410 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b578d3113999c6ccd9d2630d9956887f975fb286c0d47b5a80060ec22b69a78b" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.539481 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ptk4z" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.549144 4688 generic.go:334] "Generic (PLEG): container finished" podID="74e0dade-3e8d-4c68-b992-e672f563e545" containerID="b5746ffe34dd04272d060188c687ada301704724a0b355fcc9a5ce5be5ead2cc" exitCode=0 Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.549192 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kd4xf" event={"ID":"74e0dade-3e8d-4c68-b992-e672f563e545","Type":"ContainerDied","Data":"b5746ffe34dd04272d060188c687ada301704724a0b355fcc9a5ce5be5ead2cc"} Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.690465 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.794447 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fb745b69-g8zw6"] Sep 30 17:33:36 crc kubenswrapper[4688]: E0930 17:33:36.795005 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8979b0-26c0-428a-bc3e-4ee1d5ea066a" containerName="neutron-db-sync" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.795044 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8979b0-26c0-428a-bc3e-4ee1d5ea066a" containerName="neutron-db-sync" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.795268 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8979b0-26c0-428a-bc3e-4ee1d5ea066a" containerName="neutron-db-sync" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.796566 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.810238 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-g8zw6"] Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.866942 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.867087 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.867125 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-dns-svc\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.867148 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9bzf\" (UniqueName: \"kubernetes.io/projected/137a3eca-a79c-464a-abf5-f44887f43c5b-kube-api-access-r9bzf\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.867193 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-config\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.933833 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64499996d4-ttr8f"] Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.935500 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.950085 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.950293 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.950420 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cjvvn" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.950550 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.957276 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64499996d4-ttr8f"] Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.974840 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.974923 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-httpd-config\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.974984 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-config\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.975034 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-combined-ca-bundle\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.975067 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.975093 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h597\" (UniqueName: \"kubernetes.io/projected/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-kube-api-access-7h597\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.975119 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bzf\" (UniqueName: \"kubernetes.io/projected/137a3eca-a79c-464a-abf5-f44887f43c5b-kube-api-access-r9bzf\") pod 
\"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.975138 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-dns-svc\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.975210 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-ovndb-tls-certs\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.975249 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-config\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.976744 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.976772 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-config\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.977500 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-dns-svc\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:36 crc kubenswrapper[4688]: I0930 17:33:36.979243 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.009353 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bzf\" (UniqueName: \"kubernetes.io/projected/137a3eca-a79c-464a-abf5-f44887f43c5b-kube-api-access-r9bzf\") pod \"dnsmasq-dns-fb745b69-g8zw6\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.077003 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h597\" (UniqueName: \"kubernetes.io/projected/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-kube-api-access-7h597\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc 
kubenswrapper[4688]: I0930 17:33:37.077453 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-ovndb-tls-certs\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.077587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-httpd-config\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.077660 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-config\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.077718 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-combined-ca-bundle\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.085853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-config\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.087421 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-combined-ca-bundle\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.103637 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-ovndb-tls-certs\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.105410 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h597\" (UniqueName: \"kubernetes.io/projected/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-kube-api-access-7h597\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.111353 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-httpd-config\") pod \"neutron-64499996d4-ttr8f\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.124234 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.297345 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.441389 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934e6a06-9f81-40c5-b8aa-464841bb5312" path="/var/lib/kubelet/pods/934e6a06-9f81-40c5-b8aa-464841bb5312/volumes" Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.567009 4688 generic.go:334] "Generic (PLEG): container finished" podID="22982660-e8a2-4e49-9b5d-ebc252fd1c8e" containerID="090587b96d5ef62200849cd8b892601a761cb9b887bc68f789f3886110d2e3a0" exitCode=0 Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.567158 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7f8lt" event={"ID":"22982660-e8a2-4e49-9b5d-ebc252fd1c8e","Type":"ContainerDied","Data":"090587b96d5ef62200849cd8b892601a761cb9b887bc68f789f3886110d2e3a0"} Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.568667 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d21528b5-a44f-47ec-9a81-6e408908f80d","Type":"ContainerStarted","Data":"8d9e34c96578036e0b3c373db441706aab1bf56010be37d06d33bc2ecde9b95b"} Sep 30 17:33:37 crc kubenswrapper[4688]: I0930 17:33:37.818144 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-g8zw6"] Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.076144 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kd4xf" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.081551 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64499996d4-ttr8f"] Sep 30 17:33:38 crc kubenswrapper[4688]: W0930 17:33:38.098086 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281614b_02a1_4b5f_a3e7_ab27cb5b2683.slice/crio-9671bd0246a6d0db61c2d009cb47e19f3c6efbea76f1e6633cfe137f99ce8509 WatchSource:0}: Error finding container 9671bd0246a6d0db61c2d009cb47e19f3c6efbea76f1e6633cfe137f99ce8509: Status 404 returned error can't find the container with id 9671bd0246a6d0db61c2d009cb47e19f3c6efbea76f1e6633cfe137f99ce8509 Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.107403 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-fernet-keys\") pod \"74e0dade-3e8d-4c68-b992-e672f563e545\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.107485 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-combined-ca-bundle\") pod \"74e0dade-3e8d-4c68-b992-e672f563e545\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.107560 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-config-data\") pod \"74e0dade-3e8d-4c68-b992-e672f563e545\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.107641 4688 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-credential-keys\") pod \"74e0dade-3e8d-4c68-b992-e672f563e545\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.107707 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-scripts\") pod \"74e0dade-3e8d-4c68-b992-e672f563e545\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.107766 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6q7v\" (UniqueName: \"kubernetes.io/projected/74e0dade-3e8d-4c68-b992-e672f563e545-kube-api-access-l6q7v\") pod \"74e0dade-3e8d-4c68-b992-e672f563e545\" (UID: \"74e0dade-3e8d-4c68-b992-e672f563e545\") " Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.132913 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "74e0dade-3e8d-4c68-b992-e672f563e545" (UID: "74e0dade-3e8d-4c68-b992-e672f563e545"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.135008 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "74e0dade-3e8d-4c68-b992-e672f563e545" (UID: "74e0dade-3e8d-4c68-b992-e672f563e545"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.137886 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-scripts" (OuterVolumeSpecName: "scripts") pod "74e0dade-3e8d-4c68-b992-e672f563e545" (UID: "74e0dade-3e8d-4c68-b992-e672f563e545"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.139266 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e0dade-3e8d-4c68-b992-e672f563e545-kube-api-access-l6q7v" (OuterVolumeSpecName: "kube-api-access-l6q7v") pod "74e0dade-3e8d-4c68-b992-e672f563e545" (UID: "74e0dade-3e8d-4c68-b992-e672f563e545"). InnerVolumeSpecName "kube-api-access-l6q7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.169623 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74e0dade-3e8d-4c68-b992-e672f563e545" (UID: "74e0dade-3e8d-4c68-b992-e672f563e545"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.208733 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-config-data" (OuterVolumeSpecName: "config-data") pod "74e0dade-3e8d-4c68-b992-e672f563e545" (UID: "74e0dade-3e8d-4c68-b992-e672f563e545"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.209708 4688 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-credential-keys\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.210300 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.210412 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6q7v\" (UniqueName: \"kubernetes.io/projected/74e0dade-3e8d-4c68-b992-e672f563e545-kube-api-access-l6q7v\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.210467 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.210522 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.210590 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74e0dade-3e8d-4c68-b992-e672f563e545-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.584990 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d21528b5-a44f-47ec-9a81-6e408908f80d","Type":"ContainerStarted","Data":"b0a6f887c1666cdc016e916168b293a31b068992aebfb6998cb2debf8123edf8"} Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.604795 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64499996d4-ttr8f" event={"ID":"3281614b-02a1-4b5f-a3e7-ab27cb5b2683","Type":"ContainerStarted","Data":"ab370381cc5ea5652d36adf9913981005d1fcd95b8fe553b81c6b36512c0ecd8"} Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.604866 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64499996d4-ttr8f" event={"ID":"3281614b-02a1-4b5f-a3e7-ab27cb5b2683","Type":"ContainerStarted","Data":"9671bd0246a6d0db61c2d009cb47e19f3c6efbea76f1e6633cfe137f99ce8509"} Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.635816 4688 generic.go:334] "Generic (PLEG): container finished" podID="137a3eca-a79c-464a-abf5-f44887f43c5b" containerID="37db38143a59d6c6247de30deb49c761203a23b38faf907b78d0a5d10ef3b749" exitCode=0 Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.636070 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" event={"ID":"137a3eca-a79c-464a-abf5-f44887f43c5b","Type":"ContainerDied","Data":"37db38143a59d6c6247de30deb49c761203a23b38faf907b78d0a5d10ef3b749"} Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.636163 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" event={"ID":"137a3eca-a79c-464a-abf5-f44887f43c5b","Type":"ContainerStarted","Data":"cd9629813a0fae7c041171086929fb8740499da3153d329cf44a83d17090c62c"} Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 
17:33:38.640757 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kd4xf" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.642115 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kd4xf" event={"ID":"74e0dade-3e8d-4c68-b992-e672f563e545","Type":"ContainerDied","Data":"ca7cc27fb60885886d4b8f7ce8ecb27b6ac63837af8d3cae15dcbb5b119e0d2f"} Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.642157 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7cc27fb60885886d4b8f7ce8ecb27b6ac63837af8d3cae15dcbb5b119e0d2f" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.777860 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7cb94dd69b-49qxn"] Sep 30 17:33:38 crc kubenswrapper[4688]: E0930 17:33:38.778637 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e0dade-3e8d-4c68-b992-e672f563e545" containerName="keystone-bootstrap" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.778686 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e0dade-3e8d-4c68-b992-e672f563e545" containerName="keystone-bootstrap" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.778912 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e0dade-3e8d-4c68-b992-e672f563e545" containerName="keystone-bootstrap" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.779821 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.785512 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.786034 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.786293 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.786417 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.788198 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jhgvn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.788416 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.794919 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cb94dd69b-49qxn"] Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.838705 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-combined-ca-bundle\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.853626 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-internal-tls-certs\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " 
pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.853797 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-scripts\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.853881 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-credential-keys\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.853906 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-config-data\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.853977 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-fernet-keys\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.853997 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-public-tls-certs\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.854350 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzc9h\" (UniqueName: \"kubernetes.io/projected/6b52ebb3-8313-4e9d-9227-bd89bf88639d-kube-api-access-gzc9h\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.956312 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-scripts\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.956382 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-credential-keys\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.956403 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-config-data\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" 
Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.956453 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-fernet-keys\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.956469 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-public-tls-certs\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.956579 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzc9h\" (UniqueName: \"kubernetes.io/projected/6b52ebb3-8313-4e9d-9227-bd89bf88639d-kube-api-access-gzc9h\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.956618 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-combined-ca-bundle\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.956657 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-internal-tls-certs\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.980716 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-combined-ca-bundle\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.989229 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-credential-keys\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.989819 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-public-tls-certs\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:38 crc kubenswrapper[4688]: I0930 17:33:38.993613 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-internal-tls-certs\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.007503 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-scripts\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.007742 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-config-data\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.007886 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b52ebb3-8313-4e9d-9227-bd89bf88639d-fernet-keys\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.008257 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzc9h\" (UniqueName: \"kubernetes.io/projected/6b52ebb3-8313-4e9d-9227-bd89bf88639d-kube-api-access-gzc9h\") pod \"keystone-7cb94dd69b-49qxn\" (UID: \"6b52ebb3-8313-4e9d-9227-bd89bf88639d\") " pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.134187 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.306986 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7f8lt" Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.370056 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-combined-ca-bundle\") pod \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.370411 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-config-data\") pod \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.370637 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-logs\") pod \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.370666 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjccw\" (UniqueName: \"kubernetes.io/projected/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-kube-api-access-mjccw\") pod \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.370705 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-scripts\") pod \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\" (UID: \"22982660-e8a2-4e49-9b5d-ebc252fd1c8e\") " Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.372768 
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.382961 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-kube-api-access-mjccw" (OuterVolumeSpecName: "kube-api-access-mjccw") pod "22982660-e8a2-4e49-9b5d-ebc252fd1c8e" (UID: "22982660-e8a2-4e49-9b5d-ebc252fd1c8e"). InnerVolumeSpecName "kube-api-access-mjccw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.384130 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-scripts" (OuterVolumeSpecName: "scripts") pod "22982660-e8a2-4e49-9b5d-ebc252fd1c8e" (UID: "22982660-e8a2-4e49-9b5d-ebc252fd1c8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.449748 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-config-data" (OuterVolumeSpecName: "config-data") pod "22982660-e8a2-4e49-9b5d-ebc252fd1c8e" (UID: "22982660-e8a2-4e49-9b5d-ebc252fd1c8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.462472 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22982660-e8a2-4e49-9b5d-ebc252fd1c8e" (UID: "22982660-e8a2-4e49-9b5d-ebc252fd1c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.472409 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-logs\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.472432 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjccw\" (UniqueName: \"kubernetes.io/projected/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-kube-api-access-mjccw\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.472444 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.472453 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.472465 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22982660-e8a2-4e49-9b5d-ebc252fd1c8e-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.687602 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64499996d4-ttr8f" event={"ID":"3281614b-02a1-4b5f-a3e7-ab27cb5b2683","Type":"ContainerStarted","Data":"4c2e1e343c264c38fb88512b51dcb2296ee7876e6d54a16baf3972d32cd7d8be"}
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.689281 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64499996d4-ttr8f"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.704260 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" event={"ID":"137a3eca-a79c-464a-abf5-f44887f43c5b","Type":"ContainerStarted","Data":"b1e17c5b004f92b7e8f46c642db5f3a2141a3be319ec344d49fc5fcf0775b611"}
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.719390 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7f8lt" event={"ID":"22982660-e8a2-4e49-9b5d-ebc252fd1c8e","Type":"ContainerDied","Data":"3c77c932bea4241e1fcf5156a9f275db1f17c2eaa202bd1436480e8fd9c33330"}
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.719650 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c77c932bea4241e1fcf5156a9f275db1f17c2eaa202bd1436480e8fd9c33330"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.720040 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7f8lt"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.738378 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d8684c999-h2g4c"]
Sep 30 17:33:39 crc kubenswrapper[4688]: E0930 17:33:39.739225 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22982660-e8a2-4e49-9b5d-ebc252fd1c8e" containerName="placement-db-sync"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.739323 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="22982660-e8a2-4e49-9b5d-ebc252fd1c8e" containerName="placement-db-sync"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.740675 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64499996d4-ttr8f" podStartSLOduration=3.740653075 podStartE2EDuration="3.740653075s" podCreationTimestamp="2025-09-30 17:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:39.72017352 +0000 UTC m=+1077.015611088" watchObservedRunningTime="2025-09-30 17:33:39.740653075 +0000 UTC m=+1077.036090643"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.741085 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="22982660-e8a2-4e49-9b5d-ebc252fd1c8e" containerName="placement-db-sync"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.742235 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d21528b5-a44f-47ec-9a81-6e408908f80d","Type":"ContainerStarted","Data":"f550d9ff6855f9a686dded1448da6fd152d2f73ad76fd29f8072555db5c56c68"}
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.743657 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.769823 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.772029 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.833624 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d8684c999-h2g4c"]
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.842420 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.842395066 podStartE2EDuration="4.842395066s" podCreationTimestamp="2025-09-30 17:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:39.789961096 +0000 UTC m=+1077.085398664" watchObservedRunningTime="2025-09-30 17:33:39.842395066 +0000 UTC m=+1077.137832624"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.865220 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-67b5cc9656-9lxh4"]
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.867935 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-67b5cc9656-9lxh4"
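The pod_startup_latency_tracker lines above report the end-to-end startup duration as the gap between podCreationTimestamp and the time the pod was observed running; for neutron-64499996d4-ttr8f, 17:33:39.740653075 minus 17:33:36 gives exactly the logged 3.740653075s. A small Go check of that arithmetic, using the timestamps copied from the log (the layout matches Go's default time.Time formatting, which is what kubelet prints here):

```go
package main

import (
	"fmt"
	"time"
)

// Recompute the neutron pod's startup duration from the two timestamps
// in the log line above.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-09-30 17:33:36 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-09-30 17:33:39.740653075 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 3.740653075s, the logged podStartE2EDuration.
	fmt.Println(running.Sub(created))
}
```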
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.871689 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.874281 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w6vl7"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.874976 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.875454 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.876633 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.882857 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cb94dd69b-49qxn"]
Sep 30 17:33:39 crc kubenswrapper[4688]: W0930 17:33:39.885995 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b52ebb3_8313_4e9d_9227_bd89bf88639d.slice/crio-0831512c0877d034d84c72e4c18ee8e77bac3cfd2bf12820642fd535593d552f WatchSource:0}: Error finding container 0831512c0877d034d84c72e4c18ee8e77bac3cfd2bf12820642fd535593d552f: Status 404 returned error can't find the container with id 0831512c0877d034d84c72e4c18ee8e77bac3cfd2bf12820642fd535593d552f
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.892193 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-67b5cc9656-9lxh4"]
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.919302 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-internal-tls-certs\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.919432 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-public-tls-certs\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.919489 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-httpd-config\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.919515 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-ovndb-tls-certs\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.919549 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-combined-ca-bundle\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.919605 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-config\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:39 crc kubenswrapper[4688]: I0930 17:33:39.919671 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9x77\" (UniqueName: \"kubernetes.io/projected/fcb14b73-4266-4adf-a6cf-649855edb179-kube-api-access-q9x77\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:39 crc kubenswrapper[4688]: E0930 17:33:39.923083 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22982660_e8a2_4e49_9b5d_ebc252fd1c8e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22982660_e8a2_4e49_9b5d_ebc252fd1c8e.slice/crio-3c77c932bea4241e1fcf5156a9f275db1f17c2eaa202bd1436480e8fd9c33330\": RecentStats: unable to find data in memory cache]"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021681 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f236ad-c05c-4f2e-b68d-d6b867c456f8-logs\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021744 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-public-tls-certs\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021773 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-config-data\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021823 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-internal-tls-certs\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021868 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-internal-tls-certs\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021898 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-public-tls-certs\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021920 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-combined-ca-bundle\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021941 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-httpd-config\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021961 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-ovndb-tls-certs\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.021987 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-combined-ca-bundle\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.022018 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-config\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.022045 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-scripts\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.022073 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hch\" (UniqueName: \"kubernetes.io/projected/27f236ad-c05c-4f2e-b68d-d6b867c456f8-kube-api-access-h5hch\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.022102 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9x77\" (UniqueName: \"kubernetes.io/projected/fcb14b73-4266-4adf-a6cf-649855edb179-kube-api-access-q9x77\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.027178 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-combined-ca-bundle\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.027923 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-config\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.031216 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-public-tls-certs\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.032082 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-ovndb-tls-certs\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.033297 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-internal-tls-certs\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.037969 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fcb14b73-4266-4adf-a6cf-649855edb179-httpd-config\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.047364 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9x77\" (UniqueName: \"kubernetes.io/projected/fcb14b73-4266-4adf-a6cf-649855edb179-kube-api-access-q9x77\") pod \"neutron-5d8684c999-h2g4c\" (UID: \"fcb14b73-4266-4adf-a6cf-649855edb179\") " pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.124695 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-internal-tls-certs\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.124783 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-combined-ca-bundle\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.124873 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-scripts\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.124915 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5hch\" (UniqueName: \"kubernetes.io/projected/27f236ad-c05c-4f2e-b68d-d6b867c456f8-kube-api-access-h5hch\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.124966 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f236ad-c05c-4f2e-b68d-d6b867c456f8-logs\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.124991 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-public-tls-certs\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.125019 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-config-data\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.127260 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27f236ad-c05c-4f2e-b68d-d6b867c456f8-logs\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.129732 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d8684c999-h2g4c"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.141862 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-internal-tls-certs\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.142917 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-scripts\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.143308 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-public-tls-certs\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.145830 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-config-data\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.152805 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5hch\" (UniqueName: \"kubernetes.io/projected/27f236ad-c05c-4f2e-b68d-d6b867c456f8-kube-api-access-h5hch\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.152924 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27f236ad-c05c-4f2e-b68d-d6b867c456f8-combined-ca-bundle\") pod \"placement-67b5cc9656-9lxh4\" (UID: \"27f236ad-c05c-4f2e-b68d-d6b867c456f8\") " pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.197218 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-67b5cc9656-9lxh4"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.759773 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cb94dd69b-49qxn" event={"ID":"6b52ebb3-8313-4e9d-9227-bd89bf88639d","Type":"ContainerStarted","Data":"afa4f8ac39db54b6b56beb1a7a1eae54b18d50de1583f3cfd84c69313959e403"}
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.762126 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fb745b69-g8zw6"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.762240 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cb94dd69b-49qxn" event={"ID":"6b52ebb3-8313-4e9d-9227-bd89bf88639d","Type":"ContainerStarted","Data":"0831512c0877d034d84c72e4c18ee8e77bac3cfd2bf12820642fd535593d552f"}
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.762264 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7cb94dd69b-49qxn"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.884810 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7cb94dd69b-49qxn" podStartSLOduration=2.884790282 podStartE2EDuration="2.884790282s" podCreationTimestamp="2025-09-30 17:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:40.82709095 +0000 UTC m=+1078.122528518" watchObservedRunningTime="2025-09-30 17:33:40.884790282 +0000 UTC m=+1078.180227840"
Sep 30 17:33:40 crc kubenswrapper[4688]: I0930 17:33:40.888904 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" podStartSLOduration=4.888886866 podStartE2EDuration="4.888886866s" podCreationTimestamp="2025-09-30 17:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:40.883714205 +0000 UTC m=+1078.179151763" watchObservedRunningTime="2025-09-30 17:33:40.888886866 +0000 UTC m=+1078.184324424"
Sep 30 17:33:41 crc kubenswrapper[4688]: I0930 17:33:41.335900 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6dd7b7d6d-nfbnw"
Sep 30 17:33:41 crc kubenswrapper[4688]: I0930 17:33:41.336596 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6dd7b7d6d-nfbnw"
Sep 30 17:33:41 crc kubenswrapper[4688]: I0930 17:33:41.761658 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5dddb98558-nts96"
Sep 30 17:33:41 crc kubenswrapper[4688]: I0930 17:33:41.762214 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5dddb98558-nts96"
Sep 30 17:33:41 crc kubenswrapper[4688]: I0930 17:33:41.764146 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5dddb98558-nts96" podUID="c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Sep 30 17:33:42 crc kubenswrapper[4688]: I0930 17:33:42.292516 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5668f89c89-nk5pt"
Sep 30 17:33:43 crc kubenswrapper[4688]: I0930 17:33:43.661729 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
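The horizon startup-probe failures above show the prober issuing an HTTPS GET against the pod IP and counting a refused connection as a failed probe. A rough standalone Go stand-in for that check, not kubelet's actual prober, with the URL copied from the horizon-5dddb98558-nts96 failure:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe GETs the URL, skips certificate verification the way an HTTPS
// probe does, and reports failure on any transport error such as
// "connect: connection refused" or a non-2xx/3xx status.
func probe(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("Probe failed: %w", err) // mirrors the prober.go:107 output above
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("Probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/"); err != nil {
		fmt.Println(err)
	}
}
```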
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:43 crc kubenswrapper[4688]: I0930 17:33:43.662715 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:43 crc kubenswrapper[4688]: I0930 17:33:43.715578 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:43 crc kubenswrapper[4688]: I0930 17:33:43.725242 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:43 crc kubenswrapper[4688]: I0930 17:33:43.790280 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:43 crc kubenswrapper[4688]: I0930 17:33:43.792159 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:45 crc kubenswrapper[4688]: I0930 17:33:45.354166 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-67b5cc9656-9lxh4"] Sep 30 17:33:45 crc kubenswrapper[4688]: I0930 17:33:45.818080 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerStarted","Data":"b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741"} Sep 30 17:33:45 crc kubenswrapper[4688]: I0930 17:33:45.820909 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67b5cc9656-9lxh4" event={"ID":"27f236ad-c05c-4f2e-b68d-d6b867c456f8","Type":"ContainerStarted","Data":"b3456d1bf57d7365ffb30ff73d65e7c242aa7019d6f798a22c6ed70c934219bc"} Sep 30 17:33:45 crc kubenswrapper[4688]: I0930 17:33:45.820972 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67b5cc9656-9lxh4" event={"ID":"27f236ad-c05c-4f2e-b68d-d6b867c456f8","Type":"ContainerStarted","Data":"75bf7d58c886422731a8fefcfa712bedfadde48621969f81b94507c903d41f10"} Sep 30 17:33:45 crc kubenswrapper[4688]: I0930 17:33:45.822721 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fxx6x" event={"ID":"d1c36448-a984-423c-a9b6-6c4d210ca4b9","Type":"ContainerStarted","Data":"cfd7b33eae57eeb85f8259223958cd92c821949ba5867a79789f7e3a1cb9b33b"} Sep 30 17:33:45 crc kubenswrapper[4688]: I0930 17:33:45.835831 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d8684c999-h2g4c"] Sep 30 17:33:45 crc kubenswrapper[4688]: I0930 17:33:45.844669 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fxx6x" podStartSLOduration=2.459815861 podStartE2EDuration="52.844643993s" podCreationTimestamp="2025-09-30 17:32:53 +0000 UTC" firstStartedPulling="2025-09-30 17:32:54.893700941 +0000 UTC m=+1032.189138489" lastFinishedPulling="2025-09-30 17:33:45.278529063 +0000 UTC m=+1082.573966621" observedRunningTime="2025-09-30 17:33:45.843797982 +0000 UTC m=+1083.139235540" watchObservedRunningTime="2025-09-30 17:33:45.844643993 +0000 UTC m=+1083.140081551" Sep 30 17:33:45 crc kubenswrapper[4688]: I0930 17:33:45.922018 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 17:33:45 crc kubenswrapper[4688]: I0930 17:33:45.923744 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 17:33:46 crc 
kubenswrapper[4688]: I0930 17:33:46.012017 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.025895 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.593977 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.595026 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.857402 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-67b5cc9656-9lxh4" event={"ID":"27f236ad-c05c-4f2e-b68d-d6b867c456f8","Type":"ContainerStarted","Data":"80c28b2cde8ef0c1658ffae716e80eaa653c8ba8b047b0ae9805fa393bc135db"} Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.857827 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-67b5cc9656-9lxh4" Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.857877 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-67b5cc9656-9lxh4" Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.880125 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d8684c999-h2g4c" event={"ID":"fcb14b73-4266-4adf-a6cf-649855edb179","Type":"ContainerStarted","Data":"9ed51a10f8932429cf2111719410615cb5ac542310952d82f51f25aa26a147c4"} Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.880193 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d8684c999-h2g4c" event={"ID":"fcb14b73-4266-4adf-a6cf-649855edb179","Type":"ContainerStarted","Data":"b2dc6950498225339502b959edcfc81bdb19808ca6e374c81b9448750cc67b07"} Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.885352 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-67b5cc9656-9lxh4" podStartSLOduration=7.885330987 podStartE2EDuration="7.885330987s" podCreationTimestamp="2025-09-30 17:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:46.884880026 +0000 UTC m=+1084.180317584" watchObservedRunningTime="2025-09-30 17:33:46.885330987 +0000 UTC m=+1084.180768545" Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.899833 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c6cps" event={"ID":"8fc61503-23f4-4eb4-b3fa-e1825b84881f","Type":"ContainerStarted","Data":"4f5e0cd6e7f3dba6e718798e89abcd76bd080a628d6dd15364099bbc9e1bd865"} Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.900663 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.901999 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 17:33:46 crc kubenswrapper[4688]: I0930 17:33:46.958390 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-c6cps" podStartSLOduration=3.943562977 podStartE2EDuration="54.958368996s" podCreationTimestamp="2025-09-30 17:32:52 +0000 UTC" firstStartedPulling="2025-09-30 
17:32:54.265063978 +0000 UTC m=+1031.560501536" lastFinishedPulling="2025-09-30 17:33:45.279869997 +0000 UTC m=+1082.575307555" observedRunningTime="2025-09-30 17:33:46.943219484 +0000 UTC m=+1084.238657062" watchObservedRunningTime="2025-09-30 17:33:46.958368996 +0000 UTC m=+1084.253806554" Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.127772 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.251603 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-5ljcb"] Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.251949 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" podUID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" containerName="dnsmasq-dns" containerID="cri-o://3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786" gracePeriod=10 Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.474828 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" podUID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.925445 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.931111 4688 generic.go:334] "Generic (PLEG): container finished" podID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" containerID="3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786" exitCode=0 Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.931259 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" event={"ID":"2da37fae-0534-46b8-bd59-8d4a3c6fffd5","Type":"ContainerDied","Data":"3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786"} Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.931301 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" event={"ID":"2da37fae-0534-46b8-bd59-8d4a3c6fffd5","Type":"ContainerDied","Data":"3f36b94c397dbb2d86221fe3df0a1571c40c41e8face95de238de6915fccc268"} Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.931327 4688 scope.go:117] "RemoveContainer" containerID="3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786" Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.940222 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d8684c999-h2g4c" event={"ID":"fcb14b73-4266-4adf-a6cf-649855edb179","Type":"ContainerStarted","Data":"a82f88404b6ef992e9b1931171101c0b3e95316ead5f104c9b827bca917a4c9c"} Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.940269 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d8684c999-h2g4c" Sep 30 17:33:47 crc kubenswrapper[4688]: I0930 17:33:47.996194 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d8684c999-h2g4c" podStartSLOduration=8.996171837 podStartE2EDuration="8.996171837s" podCreationTimestamp="2025-09-30 17:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:33:47.980906503 +0000 UTC m=+1085.276344061" 
watchObservedRunningTime="2025-09-30 17:33:47.996171837 +0000 UTC m=+1085.291609395" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.008173 4688 scope.go:117] "RemoveContainer" containerID="63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.070322 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-config\") pod \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.070400 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-dns-svc\") pod \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.070597 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-nb\") pod \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.070668 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67kjr\" (UniqueName: \"kubernetes.io/projected/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-kube-api-access-67kjr\") pod \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.070744 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-sb\") pod \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\" (UID: \"2da37fae-0534-46b8-bd59-8d4a3c6fffd5\") " Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.085766 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-kube-api-access-67kjr" (OuterVolumeSpecName: "kube-api-access-67kjr") pod "2da37fae-0534-46b8-bd59-8d4a3c6fffd5" (UID: "2da37fae-0534-46b8-bd59-8d4a3c6fffd5"). InnerVolumeSpecName "kube-api-access-67kjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.109717 4688 scope.go:117] "RemoveContainer" containerID="3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786" Sep 30 17:33:48 crc kubenswrapper[4688]: E0930 17:33:48.119306 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786\": container with ID starting with 3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786 not found: ID does not exist" containerID="3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.119372 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786"} err="failed to get container status \"3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786\": rpc error: code = NotFound desc = could not find container \"3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786\": container with ID starting with 3a7086af2ac81e0948487475733f27f07cfb45cc2bf752d613f4b4f1fe5f5786 not found: ID does not exist" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.119420 4688 scope.go:117] "RemoveContainer" containerID="63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3" Sep 30 17:33:48 crc kubenswrapper[4688]: E0930 17:33:48.126480 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3\": container with ID starting with 63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3 not found: ID does not exist" containerID="63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.126862 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3"} err="failed to get container status \"63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3\": rpc error: code = NotFound desc = could not find container \"63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3\": container with ID starting with 63642800b4ab6bf75ce6d028d43b691d7084deea8773e014d2d4fe84016702e3 not found: ID does not exist" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.173316 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67kjr\" (UniqueName: \"kubernetes.io/projected/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-kube-api-access-67kjr\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.177394 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-config" (OuterVolumeSpecName: "config") pod "2da37fae-0534-46b8-bd59-8d4a3c6fffd5" (UID: "2da37fae-0534-46b8-bd59-8d4a3c6fffd5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.191071 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2da37fae-0534-46b8-bd59-8d4a3c6fffd5" (UID: "2da37fae-0534-46b8-bd59-8d4a3c6fffd5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.198271 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2da37fae-0534-46b8-bd59-8d4a3c6fffd5" (UID: "2da37fae-0534-46b8-bd59-8d4a3c6fffd5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.228588 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2da37fae-0534-46b8-bd59-8d4a3c6fffd5" (UID: "2da37fae-0534-46b8-bd59-8d4a3c6fffd5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.276087 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.276142 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.276156 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.276167 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da37fae-0534-46b8-bd59-8d4a3c6fffd5-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.948849 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-5ljcb" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.948875 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.949265 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:33:48 crc kubenswrapper[4688]: I0930 17:33:48.981540 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-5ljcb"] Sep 30 17:33:49 crc kubenswrapper[4688]: I0930 17:33:49.003923 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-5ljcb"] Sep 30 17:33:49 crc kubenswrapper[4688]: I0930 17:33:49.444167 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" path="/var/lib/kubelet/pods/2da37fae-0534-46b8-bd59-8d4a3c6fffd5/volumes" Sep 30 17:33:50 crc kubenswrapper[4688]: I0930 17:33:50.164171 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 30 17:33:50 crc kubenswrapper[4688]: I0930 17:33:50.164316 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:33:50 crc kubenswrapper[4688]: I0930 17:33:50.425771 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 30 17:33:51 crc kubenswrapper[4688]: I0930 17:33:51.342908 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6dd7b7d6d-nfbnw" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Sep 30 17:33:51 crc kubenswrapper[4688]: I0930 17:33:51.762422 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5dddb98558-nts96" podUID="c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Sep 30 17:33:52 crc kubenswrapper[4688]: E0930 17:33:52.128622 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="cdd2efbd-01c7-43ce-8521-e4b7de27bf28" Sep 30 17:33:52 crc kubenswrapper[4688]: I0930 17:33:52.547558 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:33:52 crc kubenswrapper[4688]: I0930 17:33:52.548065 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:33:52 crc kubenswrapper[4688]: I0930 17:33:52.985160 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Sep 30 17:33:57 crc kubenswrapper[4688]: I0930 17:33:57.205170 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:33:57 crc kubenswrapper[4688]: E0930 17:33:57.205481 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:33:57 crc kubenswrapper[4688]: E0930 17:33:57.206080 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:33:57 crc kubenswrapper[4688]: E0930 17:33:57.206155 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:35:59.206132343 +0000 UTC m=+1216.501569901 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:33:58 crc kubenswrapper[4688]: I0930 17:33:58.038498 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerStarted","Data":"ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9"} Sep 30 17:33:58 crc kubenswrapper[4688]: I0930 17:33:58.038918 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 17:33:58 crc kubenswrapper[4688]: I0930 17:33:58.038873 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="sg-core" containerID="cri-o://b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741" gracePeriod=30 Sep 30 17:33:58 crc kubenswrapper[4688]: I0930 17:33:58.038938 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="ceilometer-notification-agent" containerID="cri-o://7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca" gracePeriod=30 Sep 30 17:33:58 crc kubenswrapper[4688]: I0930 17:33:58.038931 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="proxy-httpd" containerID="cri-o://ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9" gracePeriod=30 Sep 30 17:33:58 crc kubenswrapper[4688]: I0930 17:33:58.038817 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="ceilometer-central-agent" containerID="cri-o://a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46" gracePeriod=30 Sep 30 17:33:58 crc kubenswrapper[4688]: I0930 17:33:58.070937 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.140684518 podStartE2EDuration="1m7.070897809s" podCreationTimestamp="2025-09-30 
17:32:51 +0000 UTC" firstStartedPulling="2025-09-30 17:32:53.517767339 +0000 UTC m=+1030.813204897" lastFinishedPulling="2025-09-30 17:33:57.44798063 +0000 UTC m=+1094.743418188" observedRunningTime="2025-09-30 17:33:58.065814651 +0000 UTC m=+1095.361252229" watchObservedRunningTime="2025-09-30 17:33:58.070897809 +0000 UTC m=+1095.366335377" Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.058352 4688 generic.go:334] "Generic (PLEG): container finished" podID="d1c36448-a984-423c-a9b6-6c4d210ca4b9" containerID="cfd7b33eae57eeb85f8259223958cd92c821949ba5867a79789f7e3a1cb9b33b" exitCode=0 Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.058439 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fxx6x" event={"ID":"d1c36448-a984-423c-a9b6-6c4d210ca4b9","Type":"ContainerDied","Data":"cfd7b33eae57eeb85f8259223958cd92c821949ba5867a79789f7e3a1cb9b33b"} Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.063706 4688 generic.go:334] "Generic (PLEG): container finished" podID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerID="ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9" exitCode=0 Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.063742 4688 generic.go:334] "Generic (PLEG): container finished" podID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerID="b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741" exitCode=2 Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.063753 4688 generic.go:334] "Generic (PLEG): container finished" podID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerID="a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46" exitCode=0 Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.063774 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerDied","Data":"ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9"} Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.063795 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerDied","Data":"b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741"} Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.063808 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerDied","Data":"a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46"} Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.839523 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.965689 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-scripts\") pod \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.965763 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-combined-ca-bundle\") pod \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.965829 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-run-httpd\") pod \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.965931 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-log-httpd\") pod \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.966018 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-sg-core-conf-yaml\") pod \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.966055 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-config-data\") pod \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.966087 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktjr2\" (UniqueName: \"kubernetes.io/projected/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-kube-api-access-ktjr2\") pod \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\" (UID: \"e40835d5-e9be-4a92-9b45-ceb6d0f9983d\") " Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.966784 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e40835d5-e9be-4a92-9b45-ceb6d0f9983d" (UID: "e40835d5-e9be-4a92-9b45-ceb6d0f9983d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.967009 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e40835d5-e9be-4a92-9b45-ceb6d0f9983d" (UID: "e40835d5-e9be-4a92-9b45-ceb6d0f9983d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.973855 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-kube-api-access-ktjr2" (OuterVolumeSpecName: "kube-api-access-ktjr2") pod "e40835d5-e9be-4a92-9b45-ceb6d0f9983d" (UID: "e40835d5-e9be-4a92-9b45-ceb6d0f9983d"). InnerVolumeSpecName "kube-api-access-ktjr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:33:59 crc kubenswrapper[4688]: I0930 17:33:59.976782 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-scripts" (OuterVolumeSpecName: "scripts") pod "e40835d5-e9be-4a92-9b45-ceb6d0f9983d" (UID: "e40835d5-e9be-4a92-9b45-ceb6d0f9983d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.022595 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e40835d5-e9be-4a92-9b45-ceb6d0f9983d" (UID: "e40835d5-e9be-4a92-9b45-ceb6d0f9983d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.052027 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e40835d5-e9be-4a92-9b45-ceb6d0f9983d" (UID: "e40835d5-e9be-4a92-9b45-ceb6d0f9983d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.069000 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.069102 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.069120 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.069134 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.069146 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.069157 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktjr2\" (UniqueName: \"kubernetes.io/projected/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-kube-api-access-ktjr2\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.077047 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerID="7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca" exitCode=0 Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.077155 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerDied","Data":"7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca"} Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.077233 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e40835d5-e9be-4a92-9b45-ceb6d0f9983d","Type":"ContainerDied","Data":"d1f2f839a1f0ad629e635a8384bb260a91b7fd243e30bce6e25535903ef652f1"} Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.077267 4688 scope.go:117] "RemoveContainer" containerID="ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.078274 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.081591 4688 generic.go:334] "Generic (PLEG): container finished" podID="8fc61503-23f4-4eb4-b3fa-e1825b84881f" containerID="4f5e0cd6e7f3dba6e718798e89abcd76bd080a628d6dd15364099bbc9e1bd865" exitCode=0 Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.081936 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c6cps" event={"ID":"8fc61503-23f4-4eb4-b3fa-e1825b84881f","Type":"ContainerDied","Data":"4f5e0cd6e7f3dba6e718798e89abcd76bd080a628d6dd15364099bbc9e1bd865"} Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.082214 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-config-data" (OuterVolumeSpecName: "config-data") pod "e40835d5-e9be-4a92-9b45-ceb6d0f9983d" (UID: "e40835d5-e9be-4a92-9b45-ceb6d0f9983d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.134997 4688 scope.go:117] "RemoveContainer" containerID="b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.153973 4688 scope.go:117] "RemoveContainer" containerID="7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.175205 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40835d5-e9be-4a92-9b45-ceb6d0f9983d-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.183005 4688 scope.go:117] "RemoveContainer" containerID="a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.206849 4688 scope.go:117] "RemoveContainer" containerID="ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.207549 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9\": container with ID starting with ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9 not found: ID does not exist" containerID="ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.207600 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9"} err="failed to get container status \"ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9\": rpc error: code = NotFound desc = could not find container \"ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9\": container with ID starting with ae4a8b7b9c77418abe55ca8a400ec42bdfd800838844868e4ccbeff3a6b6daa9 not found: ID does not exist" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.207625 4688 scope.go:117] "RemoveContainer" containerID="b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.208208 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741\": container with ID starting with b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741 not found: ID does not exist" containerID="b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.208279 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741"} err="failed to get container status \"b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741\": rpc error: code = NotFound desc = could not find container \"b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741\": container with ID starting with b1140b588c203322e6cc8ae3515be35a73bcc1521e9077a3a8f6a1662dfad741 not found: ID does not exist" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.208326 4688 scope.go:117] "RemoveContainer" containerID="7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 
17:34:00.208915 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca\": container with ID starting with 7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca not found: ID does not exist" containerID="7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.209046 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca"} err="failed to get container status \"7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca\": rpc error: code = NotFound desc = could not find container \"7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca\": container with ID starting with 7d3d83d00909bd60bbfa964d01716a18376ce984494ef93e3962d726ee5393ca not found: ID does not exist" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.209136 4688 scope.go:117] "RemoveContainer" containerID="a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.209601 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46\": container with ID starting with a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46 not found: ID does not exist" containerID="a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.209643 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46"} err="failed to get container status \"a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46\": rpc error: code = NotFound desc = could not find container \"a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46\": container with ID starting with a230abf04881abe26962b9499a03231d942be2ae49bd691d2dbeab4dc8c7db46 not found: ID does not exist" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.342792 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.418925 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.432459 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.440956 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.441458 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="ceilometer-notification-agent" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441487 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="ceilometer-notification-agent" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.441506 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="ceilometer-central-agent" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441517 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="ceilometer-central-agent" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.441528 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="proxy-httpd" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441536 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="proxy-httpd" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.441555 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c36448-a984-423c-a9b6-6c4d210ca4b9" containerName="barbican-db-sync" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441579 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c36448-a984-423c-a9b6-6c4d210ca4b9" containerName="barbican-db-sync" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.441591 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="sg-core" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441600 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="sg-core" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.441621 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" containerName="init" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441631 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" containerName="init" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.441655 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" containerName="dnsmasq-dns" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441662 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" containerName="dnsmasq-dns" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441878 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="sg-core" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441903 4688 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2da37fae-0534-46b8-bd59-8d4a3c6fffd5" containerName="dnsmasq-dns" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441917 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="ceilometer-central-agent" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441928 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c36448-a984-423c-a9b6-6c4d210ca4b9" containerName="barbican-db-sync" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441945 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="ceilometer-notification-agent" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.441954 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" containerName="proxy-httpd" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.444073 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.451080 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.451245 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.463530 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.482688 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v477x\" (UniqueName: \"kubernetes.io/projected/d1c36448-a984-423c-a9b6-6c4d210ca4b9-kube-api-access-v477x\") pod \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.482769 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-db-sync-config-data\") pod \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.483129 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-combined-ca-bundle\") pod \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\" (UID: \"d1c36448-a984-423c-a9b6-6c4d210ca4b9\") " Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.500213 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d1c36448-a984-423c-a9b6-6c4d210ca4b9" (UID: "d1c36448-a984-423c-a9b6-6c4d210ca4b9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.500254 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c36448-a984-423c-a9b6-6c4d210ca4b9-kube-api-access-v477x" (OuterVolumeSpecName: "kube-api-access-v477x") pod "d1c36448-a984-423c-a9b6-6c4d210ca4b9" (UID: "d1c36448-a984-423c-a9b6-6c4d210ca4b9"). InnerVolumeSpecName "kube-api-access-v477x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.527934 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1c36448-a984-423c-a9b6-6c4d210ca4b9" (UID: "d1c36448-a984-423c-a9b6-6c4d210ca4b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:00 crc kubenswrapper[4688]: E0930 17:34:00.564803 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode40835d5_e9be_4a92_9b45_ceb6d0f9983d.slice\": RecentStats: unable to find data in memory cache]" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.584819 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-run-httpd\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.584867 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhcv\" (UniqueName: \"kubernetes.io/projected/7d6ae60e-e1d8-4807-8061-062b2fde551d-kube-api-access-jjhcv\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.584902 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-scripts\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.584952 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-log-httpd\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.585022 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.585116 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.585163 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-config-data\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.585224 4688 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.585236 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v477x\" (UniqueName: \"kubernetes.io/projected/d1c36448-a984-423c-a9b6-6c4d210ca4b9-kube-api-access-v477x\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.585247 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d1c36448-a984-423c-a9b6-6c4d210ca4b9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.687161 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-run-httpd\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.687461 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjhcv\" (UniqueName: \"kubernetes.io/projected/7d6ae60e-e1d8-4807-8061-062b2fde551d-kube-api-access-jjhcv\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.687587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-scripts\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.687713 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-log-httpd\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.687842 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-run-httpd\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.688094 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.688221 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-log-httpd\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.688562 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc 
kubenswrapper[4688]: I0930 17:34:00.688778 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-config-data\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.693759 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-config-data\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.694283 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-scripts\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.694788 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.695277 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.715444 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjhcv\" (UniqueName: \"kubernetes.io/projected/7d6ae60e-e1d8-4807-8061-062b2fde551d-kube-api-access-jjhcv\") pod \"ceilometer-0\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " pod="openstack/ceilometer-0" Sep 30 17:34:00 crc kubenswrapper[4688]: I0930 17:34:00.783661 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.100129 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fxx6x" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.100279 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fxx6x" event={"ID":"d1c36448-a984-423c-a9b6-6c4d210ca4b9","Type":"ContainerDied","Data":"269fee16a8ebecb22df963e8c2cc9d1f8328ceca85a486a4d01b6e246be44e23"} Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.100713 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="269fee16a8ebecb22df963e8c2cc9d1f8328ceca85a486a4d01b6e246be44e23" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.234725 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.377240 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz"] Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.379915 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.389115 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.390078 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-66df74b4fc-5jlzp"] Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.392144 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.394096 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.394252 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-jw2t8" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.394466 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.401388 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz"] Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.414009 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-66df74b4fc-5jlzp"] Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.510106 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40835d5-e9be-4a92-9b45-ceb6d0f9983d" path="/var/lib/kubelet/pods/e40835d5-e9be-4a92-9b45-ceb6d0f9983d/volumes" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.539072 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-c6cps" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.549176 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfc28\" (UniqueName: \"kubernetes.io/projected/8f4dccc2-3d97-4492-acb8-a6e080810069-kube-api-access-mfc28\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.549303 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4dccc2-3d97-4492-acb8-a6e080810069-config-data\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.549352 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f4dccc2-3d97-4492-acb8-a6e080810069-config-data-custom\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.549440 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-combined-ca-bundle\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.549488 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-config-data-custom\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.549534 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f4dccc2-3d97-4492-acb8-a6e080810069-logs\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.552455 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4dccc2-3d97-4492-acb8-a6e080810069-combined-ca-bundle\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.552612 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn7vr\" (UniqueName: \"kubernetes.io/projected/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-kube-api-access-vn7vr\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc 
kubenswrapper[4688]: I0930 17:34:01.552715 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-config-data\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.578520 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-hksmd"] Sep 30 17:34:01 crc kubenswrapper[4688]: E0930 17:34:01.579329 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc61503-23f4-4eb4-b3fa-e1825b84881f" containerName="cinder-db-sync" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.579355 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc61503-23f4-4eb4-b3fa-e1825b84881f" containerName="cinder-db-sync" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.580080 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc61503-23f4-4eb4-b3fa-e1825b84881f" containerName="cinder-db-sync" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.591456 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-logs\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.601499 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.624062 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-hksmd"] Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.666863 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5584f7f6cd-rpz5j"] Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.669474 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.679850 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.682224 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5584f7f6cd-rpz5j"] Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.697721 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-scripts\") pod \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.697843 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-db-sync-config-data\") pod \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.697879 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-config-data\") pod \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.697941 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fc61503-23f4-4eb4-b3fa-e1825b84881f-etc-machine-id\") pod \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.698028 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-combined-ca-bundle\") pod \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.698265 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4rwj\" (UniqueName: \"kubernetes.io/projected/8fc61503-23f4-4eb4-b3fa-e1825b84881f-kube-api-access-b4rwj\") pod \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\" (UID: \"8fc61503-23f4-4eb4-b3fa-e1825b84881f\") " Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.699506 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vskk\" (UniqueName: \"kubernetes.io/projected/c35ff64a-08b3-4ec6-988d-6922734a4aa6-kube-api-access-8vskk\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.699581 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-logs\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.699675 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.699753 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfc28\" (UniqueName: \"kubernetes.io/projected/8f4dccc2-3d97-4492-acb8-a6e080810069-kube-api-access-mfc28\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.699788 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4dccc2-3d97-4492-acb8-a6e080810069-config-data\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.699812 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f4dccc2-3d97-4492-acb8-a6e080810069-config-data-custom\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.700292 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-config\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.700337 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-combined-ca-bundle\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.700361 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-config-data-custom\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.700524 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f4dccc2-3d97-4492-acb8-a6e080810069-logs\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.700554 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-dns-svc\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.700649 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4dccc2-3d97-4492-acb8-a6e080810069-combined-ca-bundle\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.700734 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn7vr\" (UniqueName: \"kubernetes.io/projected/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-kube-api-access-vn7vr\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.700797 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-config-data\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.700916 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.702457 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-logs\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.706771 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f4dccc2-3d97-4492-acb8-a6e080810069-logs\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.706824 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc61503-23f4-4eb4-b3fa-e1825b84881f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8fc61503-23f4-4eb4-b3fa-e1825b84881f" (UID: "8fc61503-23f4-4eb4-b3fa-e1825b84881f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.715713 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8fc61503-23f4-4eb4-b3fa-e1825b84881f" (UID: "8fc61503-23f4-4eb4-b3fa-e1825b84881f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.716724 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-scripts" (OuterVolumeSpecName: "scripts") pod "8fc61503-23f4-4eb4-b3fa-e1825b84881f" (UID: "8fc61503-23f4-4eb4-b3fa-e1825b84881f"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.717039 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-config-data-custom\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.717541 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f4dccc2-3d97-4492-acb8-a6e080810069-config-data-custom\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.718339 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f4dccc2-3d97-4492-acb8-a6e080810069-config-data\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.723184 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fc61503-23f4-4eb4-b3fa-e1825b84881f-kube-api-access-b4rwj" (OuterVolumeSpecName: "kube-api-access-b4rwj") pod "8fc61503-23f4-4eb4-b3fa-e1825b84881f" (UID: "8fc61503-23f4-4eb4-b3fa-e1825b84881f"). InnerVolumeSpecName "kube-api-access-b4rwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.723703 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-config-data\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.726302 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-combined-ca-bundle\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.727038 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f4dccc2-3d97-4492-acb8-a6e080810069-combined-ca-bundle\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.728462 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfc28\" (UniqueName: \"kubernetes.io/projected/8f4dccc2-3d97-4492-acb8-a6e080810069-kube-api-access-mfc28\") pod \"barbican-keystone-listener-7bdbc5f8bb-9xltz\" (UID: \"8f4dccc2-3d97-4492-acb8-a6e080810069\") " pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.729053 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn7vr\" (UniqueName: 
\"kubernetes.io/projected/69fd9ea3-7c23-450d-b3e9-b5bb57e4f324-kube-api-access-vn7vr\") pod \"barbican-worker-66df74b4fc-5jlzp\" (UID: \"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324\") " pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.748619 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-66df74b4fc-5jlzp" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.771319 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fc61503-23f4-4eb4-b3fa-e1825b84881f" (UID: "8fc61503-23f4-4eb4-b3fa-e1825b84881f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.800741 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-config-data" (OuterVolumeSpecName: "config-data") pod "8fc61503-23f4-4eb4-b3fa-e1825b84881f" (UID: "8fc61503-23f4-4eb4-b3fa-e1825b84881f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.804347 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805071 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vskk\" (UniqueName: \"kubernetes.io/projected/c35ff64a-08b3-4ec6-988d-6922734a4aa6-kube-api-access-8vskk\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805156 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805343 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data-custom\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805396 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99w69\" (UniqueName: \"kubernetes.io/projected/1e03c74e-fa0b-4d3c-94c9-9434b1266188-kube-api-access-99w69\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805444 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-config\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805488 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-dns-svc\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805539 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e03c74e-fa0b-4d3c-94c9-9434b1266188-logs\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805625 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-combined-ca-bundle\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805695 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805792 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805817 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4rwj\" (UniqueName: \"kubernetes.io/projected/8fc61503-23f4-4eb4-b3fa-e1825b84881f-kube-api-access-b4rwj\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805839 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805852 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805864 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc61503-23f4-4eb4-b3fa-e1825b84881f-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.805879 4688 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8fc61503-23f4-4eb4-b3fa-e1825b84881f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.806819 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-config\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.807030 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-dns-svc\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.807552 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.808182 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.828433 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vskk\" (UniqueName: \"kubernetes.io/projected/c35ff64a-08b3-4ec6-988d-6922734a4aa6-kube-api-access-8vskk\") pod \"dnsmasq-dns-7d649d8c65-hksmd\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.908279 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e03c74e-fa0b-4d3c-94c9-9434b1266188-logs\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.908401 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-combined-ca-bundle\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.908497 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.908600 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data-custom\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.908634 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99w69\" (UniqueName: \"kubernetes.io/projected/1e03c74e-fa0b-4d3c-94c9-9434b1266188-kube-api-access-99w69\") pod 
\"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.909798 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e03c74e-fa0b-4d3c-94c9-9434b1266188-logs\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.919364 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data-custom\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.920818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-combined-ca-bundle\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.920920 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.932267 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99w69\" (UniqueName: \"kubernetes.io/projected/1e03c74e-fa0b-4d3c-94c9-9434b1266188-kube-api-access-99w69\") pod \"barbican-api-5584f7f6cd-rpz5j\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.936990 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:01 crc kubenswrapper[4688]: I0930 17:34:01.996192 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.025310 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.133435 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c6cps" event={"ID":"8fc61503-23f4-4eb4-b3fa-e1825b84881f","Type":"ContainerDied","Data":"6f0c546525639c499e4834852cc200703f6a0118ec580837bec64f4fe9012807"} Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.133769 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f0c546525639c499e4834852cc200703f6a0118ec580837bec64f4fe9012807" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.133861 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-c6cps" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.156493 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerStarted","Data":"a97fa7da64f1af385774c8fab307c8d6adce6621a08e03c87de99cb924cd7b21"} Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.340032 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-66df74b4fc-5jlzp"] Sep 30 17:34:02 crc kubenswrapper[4688]: W0930 17:34:02.367253 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69fd9ea3_7c23_450d_b3e9_b5bb57e4f324.slice/crio-067bd83ac52a3ac0d5c45c20ffd494cff2c31aad59345d74bc49aba1537875fe WatchSource:0}: Error finding container 067bd83ac52a3ac0d5c45c20ffd494cff2c31aad59345d74bc49aba1537875fe: Status 404 returned error can't find the container with id 067bd83ac52a3ac0d5c45c20ffd494cff2c31aad59345d74bc49aba1537875fe Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.373080 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.375123 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.398927 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.409157 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.409429 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-btfmv" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.409663 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.409783 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.423905 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-hksmd"] Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.557462 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.559057 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.559096 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-scripts\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 
crc kubenswrapper[4688]: I0930 17:34:02.559185 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tspbj\" (UniqueName: \"kubernetes.io/projected/fa433d78-89e4-43ac-aea2-0ad75a3a916a-kube-api-access-tspbj\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.559524 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.559629 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa433d78-89e4-43ac-aea2-0ad75a3a916a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.587443 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57fff66767-9ptsq"] Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.589738 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.624275 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-9ptsq"] Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.638696 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-hksmd"] Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.666963 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa433d78-89e4-43ac-aea2-0ad75a3a916a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.667066 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.667106 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.667131 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-scripts\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.667175 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tspbj\" (UniqueName: 
\"kubernetes.io/projected/fa433d78-89e4-43ac-aea2-0ad75a3a916a-kube-api-access-tspbj\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.667338 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.674439 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.674551 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa433d78-89e4-43ac-aea2-0ad75a3a916a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.676562 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-scripts\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.683452 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.687829 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.705138 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tspbj\" (UniqueName: \"kubernetes.io/projected/fa433d78-89e4-43ac-aea2-0ad75a3a916a-kube-api-access-tspbj\") pod \"cinder-scheduler-0\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.713724 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.716410 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.727995 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.759260 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.770294 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4wdv\" (UniqueName: \"kubernetes.io/projected/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-kube-api-access-x4wdv\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.770445 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-nb\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.770519 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-dns-svc\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.770543 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-sb\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.770595 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-config\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.790268 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872329 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872383 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872417 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-nb\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872463 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-scripts\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872487 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-dns-svc\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872506 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-sb\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872537 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-config\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872582 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f942e26-6864-4a49-b16e-aa21b1df3d80-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872613 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4wdv\" (UniqueName: \"kubernetes.io/projected/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-kube-api-access-x4wdv\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872638 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f942e26-6864-4a49-b16e-aa21b1df3d80-logs\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872678 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.872706 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89c79\" (UniqueName: \"kubernetes.io/projected/8f942e26-6864-4a49-b16e-aa21b1df3d80-kube-api-access-89c79\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.873697 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-nb\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.874243 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-dns-svc\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.874802 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-sb\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.875327 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-config\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.897110 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4wdv\" (UniqueName: \"kubernetes.io/projected/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-kube-api-access-x4wdv\") pod \"dnsmasq-dns-57fff66767-9ptsq\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.927391 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.976747 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-scripts\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.976844 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f942e26-6864-4a49-b16e-aa21b1df3d80-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.976897 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f942e26-6864-4a49-b16e-aa21b1df3d80-logs\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.976934 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.976968 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89c79\" (UniqueName: \"kubernetes.io/projected/8f942e26-6864-4a49-b16e-aa21b1df3d80-kube-api-access-89c79\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.977010 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.977037 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.977831 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f942e26-6864-4a49-b16e-aa21b1df3d80-logs\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.978050 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f942e26-6864-4a49-b16e-aa21b1df3d80-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.985884 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-scripts\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " 
pod="openstack/cinder-api-0" Sep 30 17:34:02 crc kubenswrapper[4688]: I0930 17:34:02.990499 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.002684 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.034809 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.040039 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89c79\" (UniqueName: \"kubernetes.io/projected/8f942e26-6864-4a49-b16e-aa21b1df3d80-kube-api-access-89c79\") pod \"cinder-api-0\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " pod="openstack/cinder-api-0" Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.076566 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.131166 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz"] Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.181643 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5584f7f6cd-rpz5j"] Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.233681 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerStarted","Data":"3a6605ee0f7aec514aa3fb9b73e106e2e56d3f1b2b8ee1276f94d9ab9afde87a"} Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.245911 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" event={"ID":"c35ff64a-08b3-4ec6-988d-6922734a4aa6","Type":"ContainerStarted","Data":"fc0e48f01722ee77c362fa03ce926a4331e81e9c3186a60660c027bffb7ec302"} Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.252741 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66df74b4fc-5jlzp" event={"ID":"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324","Type":"ContainerStarted","Data":"067bd83ac52a3ac0d5c45c20ffd494cff2c31aad59345d74bc49aba1537875fe"} Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.258127 4688 generic.go:334] "Generic (PLEG): container finished" podID="00ce8852-0763-48b1-ab50-42c386c1065c" containerID="f251b59c0920d79e3e009e427d10d16fb7d2d9bb829eea98102072c8dd09a5d4" exitCode=137 Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.258162 4688 generic.go:334] "Generic (PLEG): container finished" podID="00ce8852-0763-48b1-ab50-42c386c1065c" containerID="17cd8706cdd698803d31c2686d2428856f2f3ad65c1431969d00d1e86cdcc8d2" exitCode=137 Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.258187 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-5668f89c89-nk5pt" event={"ID":"00ce8852-0763-48b1-ab50-42c386c1065c","Type":"ContainerDied","Data":"f251b59c0920d79e3e009e427d10d16fb7d2d9bb829eea98102072c8dd09a5d4"} Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.258217 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5668f89c89-nk5pt" event={"ID":"00ce8852-0763-48b1-ab50-42c386c1065c","Type":"ContainerDied","Data":"17cd8706cdd698803d31c2686d2428856f2f3ad65c1431969d00d1e86cdcc8d2"} Sep 30 17:34:03 crc kubenswrapper[4688]: W0930 17:34:03.506075 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa433d78_89e4_43ac_aea2_0ad75a3a916a.slice/crio-a4c98ece9cc8936809b1ef8b53e60f13ae7ba8793b78b01d8e98e1b03ce55a89 WatchSource:0}: Error finding container a4c98ece9cc8936809b1ef8b53e60f13ae7ba8793b78b01d8e98e1b03ce55a89: Status 404 returned error can't find the container with id a4c98ece9cc8936809b1ef8b53e60f13ae7ba8793b78b01d8e98e1b03ce55a89 Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.527338 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.646767 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-9ptsq"] Sep 30 17:34:03 crc kubenswrapper[4688]: I0930 17:34:03.996468 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:34:04 crc kubenswrapper[4688]: W0930 17:34:04.052099 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f942e26_6864_4a49_b16e_aa21b1df3d80.slice/crio-675848dd3f48491ae92fa613fc5d6f26955b0c1d42a926e65e672aa1a5b095a4 WatchSource:0}: Error finding container 675848dd3f48491ae92fa613fc5d6f26955b0c1d42a926e65e672aa1a5b095a4: Status 404 returned error can't find the container with id 675848dd3f48491ae92fa613fc5d6f26955b0c1d42a926e65e672aa1a5b095a4 Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.077468 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5668f89c89-nk5pt" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.228188 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00ce8852-0763-48b1-ab50-42c386c1065c-horizon-secret-key\") pod \"00ce8852-0763-48b1-ab50-42c386c1065c\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.228719 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-scripts\") pod \"00ce8852-0763-48b1-ab50-42c386c1065c\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.229033 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-config-data\") pod \"00ce8852-0763-48b1-ab50-42c386c1065c\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.229235 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce8852-0763-48b1-ab50-42c386c1065c-logs\") pod \"00ce8852-0763-48b1-ab50-42c386c1065c\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.229291 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4h68\" (UniqueName: \"kubernetes.io/projected/00ce8852-0763-48b1-ab50-42c386c1065c-kube-api-access-w4h68\") pod \"00ce8852-0763-48b1-ab50-42c386c1065c\" (UID: \"00ce8852-0763-48b1-ab50-42c386c1065c\") " Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.229897 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00ce8852-0763-48b1-ab50-42c386c1065c-logs" (OuterVolumeSpecName: "logs") pod "00ce8852-0763-48b1-ab50-42c386c1065c" (UID: "00ce8852-0763-48b1-ab50-42c386c1065c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.230627 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce8852-0763-48b1-ab50-42c386c1065c-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.236016 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ce8852-0763-48b1-ab50-42c386c1065c-kube-api-access-w4h68" (OuterVolumeSpecName: "kube-api-access-w4h68") pod "00ce8852-0763-48b1-ab50-42c386c1065c" (UID: "00ce8852-0763-48b1-ab50-42c386c1065c"). InnerVolumeSpecName "kube-api-access-w4h68". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.242737 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ce8852-0763-48b1-ab50-42c386c1065c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "00ce8852-0763-48b1-ab50-42c386c1065c" (UID: "00ce8852-0763-48b1-ab50-42c386c1065c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.265061 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-config-data" (OuterVolumeSpecName: "config-data") pod "00ce8852-0763-48b1-ab50-42c386c1065c" (UID: "00ce8852-0763-48b1-ab50-42c386c1065c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.277519 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-scripts" (OuterVolumeSpecName: "scripts") pod "00ce8852-0763-48b1-ab50-42c386c1065c" (UID: "00ce8852-0763-48b1-ab50-42c386c1065c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.303402 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" event={"ID":"3f7110ad-b1f2-4abf-be08-2b428fbd4c94","Type":"ContainerStarted","Data":"3a77301ac35c165791d35d49f09b8be70759b4971aa9ac60e3646d1006332b40"} Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.307649 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" event={"ID":"8f4dccc2-3d97-4492-acb8-a6e080810069","Type":"ContainerStarted","Data":"4f6b418a66b0a1b539bbe0b3bb0534e53f8a17853859fa97c07ed4de96d9f935"} Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.312427 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5584f7f6cd-rpz5j" event={"ID":"1e03c74e-fa0b-4d3c-94c9-9434b1266188","Type":"ContainerStarted","Data":"bb7f09e9572330e9007b30f378a5f094037f5ea7a6562c94b7d61bfe1a44d96b"} Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.312478 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5584f7f6cd-rpz5j" event={"ID":"1e03c74e-fa0b-4d3c-94c9-9434b1266188","Type":"ContainerStarted","Data":"5c5ff6d29309f8103cdcca8c3f4e87770ac104da8853dabef0a3bdf89aa5e06d"} Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.314641 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa433d78-89e4-43ac-aea2-0ad75a3a916a","Type":"ContainerStarted","Data":"a4c98ece9cc8936809b1ef8b53e60f13ae7ba8793b78b01d8e98e1b03ce55a89"} Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.332772 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4h68\" (UniqueName: \"kubernetes.io/projected/00ce8852-0763-48b1-ab50-42c386c1065c-kube-api-access-w4h68\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.332815 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00ce8852-0763-48b1-ab50-42c386c1065c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.332826 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.332836 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00ce8852-0763-48b1-ab50-42c386c1065c-config-data\") on node \"crc\" DevicePath 
\"\"" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.336924 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f942e26-6864-4a49-b16e-aa21b1df3d80","Type":"ContainerStarted","Data":"675848dd3f48491ae92fa613fc5d6f26955b0c1d42a926e65e672aa1a5b095a4"} Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.353077 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5668f89c89-nk5pt" event={"ID":"00ce8852-0763-48b1-ab50-42c386c1065c","Type":"ContainerDied","Data":"b9510a01cbfa012449a26f05c5bd4efdcb3bbe95427ccbe70ac53060a8845003"} Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.353156 4688 scope.go:117] "RemoveContainer" containerID="f251b59c0920d79e3e009e427d10d16fb7d2d9bb829eea98102072c8dd09a5d4" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.353159 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5668f89c89-nk5pt" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.375274 4688 generic.go:334] "Generic (PLEG): container finished" podID="c35ff64a-08b3-4ec6-988d-6922734a4aa6" containerID="785f27d9a035dcc1a13680b40bfccf0e43a86902f5b17830d44f5c5ee077ed80" exitCode=0 Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.375347 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" event={"ID":"c35ff64a-08b3-4ec6-988d-6922734a4aa6","Type":"ContainerDied","Data":"785f27d9a035dcc1a13680b40bfccf0e43a86902f5b17830d44f5c5ee077ed80"} Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.405538 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5668f89c89-nk5pt"] Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.415267 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5668f89c89-nk5pt"] Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.947716 4688 scope.go:117] "RemoveContainer" containerID="17cd8706cdd698803d31c2686d2428856f2f3ad65c1431969d00d1e86cdcc8d2" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.955194 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:34:04 crc kubenswrapper[4688]: I0930 17:34:04.963096 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.166324 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.253495 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-sb\") pod \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.253636 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-dns-svc\") pod \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.253751 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-config\") pod \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.253825 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vskk\" (UniqueName: \"kubernetes.io/projected/c35ff64a-08b3-4ec6-988d-6922734a4aa6-kube-api-access-8vskk\") pod \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.253903 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-nb\") pod \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\" (UID: \"c35ff64a-08b3-4ec6-988d-6922734a4aa6\") " Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.322466 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35ff64a-08b3-4ec6-988d-6922734a4aa6-kube-api-access-8vskk" (OuterVolumeSpecName: "kube-api-access-8vskk") pod "c35ff64a-08b3-4ec6-988d-6922734a4aa6" (UID: "c35ff64a-08b3-4ec6-988d-6922734a4aa6"). InnerVolumeSpecName "kube-api-access-8vskk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.361411 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vskk\" (UniqueName: \"kubernetes.io/projected/c35ff64a-08b3-4ec6-988d-6922734a4aa6-kube-api-access-8vskk\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.369782 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-config" (OuterVolumeSpecName: "config") pod "c35ff64a-08b3-4ec6-988d-6922734a4aa6" (UID: "c35ff64a-08b3-4ec6-988d-6922734a4aa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.416014 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.421246 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerStarted","Data":"f467955e8e74e51901629bb1bdcaaecaf5621dfc2d42c8aa59e5d8348a71a97b"} Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.431962 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.440420 4688 generic.go:334] "Generic (PLEG): container finished" podID="3f7110ad-b1f2-4abf-be08-2b428fbd4c94" containerID="da8b92bc82f6033da9bde46066f8665903835e7758b161f763d922569441d44a" exitCode=0 Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.446849 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ce8852-0763-48b1-ab50-42c386c1065c" path="/var/lib/kubelet/pods/00ce8852-0763-48b1-ab50-42c386c1065c/volumes" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.463925 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.476718 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c35ff64a-08b3-4ec6-988d-6922734a4aa6" (UID: "c35ff64a-08b3-4ec6-988d-6922734a4aa6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.524723 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c35ff64a-08b3-4ec6-988d-6922734a4aa6" (UID: "c35ff64a-08b3-4ec6-988d-6922734a4aa6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.558460 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.558525 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d649d8c65-hksmd" event={"ID":"c35ff64a-08b3-4ec6-988d-6922734a4aa6","Type":"ContainerDied","Data":"fc0e48f01722ee77c362fa03ce926a4331e81e9c3186a60660c027bffb7ec302"} Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.558875 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.558897 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" event={"ID":"3f7110ad-b1f2-4abf-be08-2b428fbd4c94","Type":"ContainerDied","Data":"da8b92bc82f6033da9bde46066f8665903835e7758b161f763d922569441d44a"} Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.558913 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5584f7f6cd-rpz5j" event={"ID":"1e03c74e-fa0b-4d3c-94c9-9434b1266188","Type":"ContainerStarted","Data":"4802b48bdd9da15bbafc48cff8ce4909f097635ff4250b8d5b4030af83be3c40"} Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.573169 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.573540 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 
17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.575539 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5584f7f6cd-rpz5j" podStartSLOduration=4.575517511 podStartE2EDuration="4.575517511s" podCreationTimestamp="2025-09-30 17:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:34:05.541360851 +0000 UTC m=+1102.836798419" watchObservedRunningTime="2025-09-30 17:34:05.575517511 +0000 UTC m=+1102.870955069" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.598260 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c35ff64a-08b3-4ec6-988d-6922734a4aa6" (UID: "c35ff64a-08b3-4ec6-988d-6922734a4aa6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.676218 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c35ff64a-08b3-4ec6-988d-6922734a4aa6-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.862625 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-hksmd"] Sep 30 17:34:05 crc kubenswrapper[4688]: I0930 17:34:05.872069 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d649d8c65-hksmd"] Sep 30 17:34:06 crc kubenswrapper[4688]: I0930 17:34:06.462168 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f942e26-6864-4a49-b16e-aa21b1df3d80","Type":"ContainerStarted","Data":"3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203"} Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.249246 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5dddb98558-nts96" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.311757 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.343043 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6dd7b7d6d-nfbnw"] Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.343321 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6dd7b7d6d-nfbnw" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon-log" containerID="cri-o://6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a" gracePeriod=30 Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.343476 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6dd7b7d6d-nfbnw" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" containerID="cri-o://0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3" gracePeriod=30 Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.357030 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6dd7b7d6d-nfbnw" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.478365 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c35ff64a-08b3-4ec6-988d-6922734a4aa6" path="/var/lib/kubelet/pods/c35ff64a-08b3-4ec6-988d-6922734a4aa6/volumes" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.558725 4688 scope.go:117] "RemoveContainer" containerID="785f27d9a035dcc1a13680b40bfccf0e43a86902f5b17830d44f5c5ee077ed80" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.687641 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55f77664db-t4vhn"] Sep 30 17:34:07 crc kubenswrapper[4688]: E0930 17:34:07.688237 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ce8852-0763-48b1-ab50-42c386c1065c" containerName="horizon" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.688260 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ce8852-0763-48b1-ab50-42c386c1065c" containerName="horizon" Sep 30 17:34:07 crc kubenswrapper[4688]: E0930 17:34:07.688289 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35ff64a-08b3-4ec6-988d-6922734a4aa6" containerName="init" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.688297 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35ff64a-08b3-4ec6-988d-6922734a4aa6" containerName="init" Sep 30 17:34:07 crc kubenswrapper[4688]: E0930 17:34:07.688311 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ce8852-0763-48b1-ab50-42c386c1065c" containerName="horizon-log" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.688319 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ce8852-0763-48b1-ab50-42c386c1065c" containerName="horizon-log" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.689318 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ce8852-0763-48b1-ab50-42c386c1065c" containerName="horizon-log" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.689351 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ce8852-0763-48b1-ab50-42c386c1065c" containerName="horizon" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.689368 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c35ff64a-08b3-4ec6-988d-6922734a4aa6" containerName="init" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.690928 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.693564 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.693902 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.710619 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f77664db-t4vhn"] Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.836471 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-combined-ca-bundle\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.836587 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-public-tls-certs\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.836627 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-config-data-custom\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.836671 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f423e22-c7d3-4385-82a5-dca95aef82a0-logs\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.836691 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-config-data\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.836746 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6s48\" (UniqueName: \"kubernetes.io/projected/4f423e22-c7d3-4385-82a5-dca95aef82a0-kube-api-access-z6s48\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.836788 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-internal-tls-certs\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.939762 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f423e22-c7d3-4385-82a5-dca95aef82a0-logs\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.939852 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-config-data\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.940024 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6s48\" (UniqueName: \"kubernetes.io/projected/4f423e22-c7d3-4385-82a5-dca95aef82a0-kube-api-access-z6s48\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.940227 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-internal-tls-certs\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.940373 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-combined-ca-bundle\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.940439 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-public-tls-certs\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.940338 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f423e22-c7d3-4385-82a5-dca95aef82a0-logs\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.942328 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-config-data-custom\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.947942 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-combined-ca-bundle\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.948860 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-config-data-custom\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.948933 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-public-tls-certs\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.953282 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-internal-tls-certs\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.960094 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6s48\" (UniqueName: \"kubernetes.io/projected/4f423e22-c7d3-4385-82a5-dca95aef82a0-kube-api-access-z6s48\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:07 crc kubenswrapper[4688]: I0930 17:34:07.974721 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f423e22-c7d3-4385-82a5-dca95aef82a0-config-data\") pod \"barbican-api-55f77664db-t4vhn\" (UID: \"4f423e22-c7d3-4385-82a5-dca95aef82a0\") " pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:08 crc kubenswrapper[4688]: I0930 17:34:08.017799 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.092498 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f77664db-t4vhn"] Sep 30 17:34:09 crc kubenswrapper[4688]: W0930 17:34:09.098915 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f423e22_c7d3_4385_82a5_dca95aef82a0.slice/crio-91940438ec0b54eca5a0329fa0ea4bce342a0a36d297035fccff165ed980b617 WatchSource:0}: Error finding container 91940438ec0b54eca5a0329fa0ea4bce342a0a36d297035fccff165ed980b617: Status 404 returned error can't find the container with id 91940438ec0b54eca5a0329fa0ea4bce342a0a36d297035fccff165ed980b617 Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.537481 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f77664db-t4vhn" event={"ID":"4f423e22-c7d3-4385-82a5-dca95aef82a0","Type":"ContainerStarted","Data":"03b9c51d5acbcaddcbb5e42117e400c5376bc627d9034d316d6190d922797c05"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.537967 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f77664db-t4vhn" event={"ID":"4f423e22-c7d3-4385-82a5-dca95aef82a0","Type":"ContainerStarted","Data":"91940438ec0b54eca5a0329fa0ea4bce342a0a36d297035fccff165ed980b617"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.545751 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerStarted","Data":"b6d590974949056296f6d81ffbdc58f9150be02e76174f87c2f3ef4004eac149"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.574797 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" event={"ID":"3f7110ad-b1f2-4abf-be08-2b428fbd4c94","Type":"ContainerStarted","Data":"2add07d18dd42f31e9b7bc40932090d776c67991def224a28974e3729219caa8"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.575086 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.594717 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" event={"ID":"8f4dccc2-3d97-4492-acb8-a6e080810069","Type":"ContainerStarted","Data":"ec1eb56adbd09a10a896dac871d0d3f073235382757d6a5835b188977f1c96b4"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.594779 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" event={"ID":"8f4dccc2-3d97-4492-acb8-a6e080810069","Type":"ContainerStarted","Data":"919cb2aba9524c49795921715a586d26c41c04fb35ce99f8f48d0b48cc255a99"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.605005 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" podStartSLOduration=7.604975067 podStartE2EDuration="7.604975067s" podCreationTimestamp="2025-09-30 17:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:34:09.599026324 +0000 UTC m=+1106.894463902" watchObservedRunningTime="2025-09-30 17:34:09.604975067 +0000 UTC m=+1106.900412635" Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.610249 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"fa433d78-89e4-43ac-aea2-0ad75a3a916a","Type":"ContainerStarted","Data":"253ec36853626ca1fa7a5cf65e2ca22d8d5ba792005107de40a08eca876e7cf9"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.626916 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f942e26-6864-4a49-b16e-aa21b1df3d80","Type":"ContainerStarted","Data":"765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.627148 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerName="cinder-api-log" containerID="cri-o://3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203" gracePeriod=30 Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.627555 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.627631 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerName="cinder-api" containerID="cri-o://765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e" gracePeriod=30 Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.643801 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66df74b4fc-5jlzp" event={"ID":"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324","Type":"ContainerStarted","Data":"6b5b4b535ff7deb0e2f97276447a84fe093139f001576c1fc2201f28a8f66e34"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.643871 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66df74b4fc-5jlzp" event={"ID":"69fd9ea3-7c23-450d-b3e9-b5bb57e4f324","Type":"ContainerStarted","Data":"7c60281863b5331cba8164be7f54b8006888e56c4c72bfa6a14aed6a2dc9aaf1"} Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.712810 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7bdbc5f8bb-9xltz" podStartSLOduration=3.499253675 podStartE2EDuration="8.712778273s" podCreationTimestamp="2025-09-30 17:34:01 +0000 UTC" firstStartedPulling="2025-09-30 17:34:03.21714319 +0000 UTC m=+1100.512580748" lastFinishedPulling="2025-09-30 17:34:08.430667788 +0000 UTC m=+1105.726105346" observedRunningTime="2025-09-30 17:34:09.627994752 +0000 UTC m=+1106.923432310" watchObservedRunningTime="2025-09-30 17:34:09.712778273 +0000 UTC m=+1107.008215831" Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.725319 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.725282307 podStartE2EDuration="7.725282307s" podCreationTimestamp="2025-09-30 17:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:34:09.70141281 +0000 UTC m=+1106.996850378" watchObservedRunningTime="2025-09-30 17:34:09.725282307 +0000 UTC m=+1107.020719875" Sep 30 17:34:09 crc kubenswrapper[4688]: I0930 17:34:09.746170 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-66df74b4fc-5jlzp" podStartSLOduration=2.7589104989999997 podStartE2EDuration="8.746144356s" podCreationTimestamp="2025-09-30 17:34:01 +0000 UTC" firstStartedPulling="2025-09-30 17:34:02.419666907 +0000 UTC 
m=+1099.715104465" lastFinishedPulling="2025-09-30 17:34:08.406900764 +0000 UTC m=+1105.702338322" observedRunningTime="2025-09-30 17:34:09.734405882 +0000 UTC m=+1107.029843440" watchObservedRunningTime="2025-09-30 17:34:09.746144356 +0000 UTC m=+1107.041581914" Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.153859 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d8684c999-h2g4c" Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.236274 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64499996d4-ttr8f"] Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.237020 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64499996d4-ttr8f" podUID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerName="neutron-api" containerID="cri-o://ab370381cc5ea5652d36adf9913981005d1fcd95b8fe553b81c6b36512c0ecd8" gracePeriod=30 Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.237290 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64499996d4-ttr8f" podUID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerName="neutron-httpd" containerID="cri-o://4c2e1e343c264c38fb88512b51dcb2296ee7876e6d54a16baf3972d32cd7d8be" gracePeriod=30 Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.699314 4688 generic.go:334] "Generic (PLEG): container finished" podID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerID="4c2e1e343c264c38fb88512b51dcb2296ee7876e6d54a16baf3972d32cd7d8be" exitCode=0 Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.700647 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64499996d4-ttr8f" event={"ID":"3281614b-02a1-4b5f-a3e7-ab27cb5b2683","Type":"ContainerDied","Data":"4c2e1e343c264c38fb88512b51dcb2296ee7876e6d54a16baf3972d32cd7d8be"} Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.732840 4688 generic.go:334] "Generic (PLEG): container finished" podID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerID="3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203" exitCode=143 Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.733398 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f942e26-6864-4a49-b16e-aa21b1df3d80","Type":"ContainerDied","Data":"3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203"} Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.755210 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f77664db-t4vhn" event={"ID":"4f423e22-c7d3-4385-82a5-dca95aef82a0","Type":"ContainerStarted","Data":"2d9f1bd03dfa8e55d88c9f18bc8438f55155760ad4fe307d461d179afeb75cea"} Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.756865 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.756901 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.787874 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55f77664db-t4vhn" podStartSLOduration=3.787857119 podStartE2EDuration="3.787857119s" podCreationTimestamp="2025-09-30 17:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 
17:34:10.78710239 +0000 UTC m=+1108.082539938" watchObservedRunningTime="2025-09-30 17:34:10.787857119 +0000 UTC m=+1108.083294677" Sep 30 17:34:10 crc kubenswrapper[4688]: I0930 17:34:10.832988 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6dd7b7d6d-nfbnw" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:57304->10.217.0.148:8443: read: connection reset by peer" Sep 30 17:34:11 crc kubenswrapper[4688]: E0930 17:34:11.002028 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a40d4d2_4534_4959_b23a_07bd542d56b7.slice/crio-0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3.scope\": RecentStats: unable to find data in memory cache]" Sep 30 17:34:11 crc kubenswrapper[4688]: I0930 17:34:11.337472 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6dd7b7d6d-nfbnw" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Sep 30 17:34:11 crc kubenswrapper[4688]: I0930 17:34:11.768022 4688 generic.go:334] "Generic (PLEG): container finished" podID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerID="0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3" exitCode=0 Sep 30 17:34:11 crc kubenswrapper[4688]: I0930 17:34:11.768108 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dd7b7d6d-nfbnw" event={"ID":"7a40d4d2-4534-4959-b23a-07bd542d56b7","Type":"ContainerDied","Data":"0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3"} Sep 30 17:34:11 crc kubenswrapper[4688]: I0930 17:34:11.772061 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerStarted","Data":"6985ee228e8d527447f81d61f1575a514d7f6d182a05ff4b6e03428078b5c10b"} Sep 30 17:34:11 crc kubenswrapper[4688]: I0930 17:34:11.773994 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 17:34:11 crc kubenswrapper[4688]: I0930 17:34:11.777726 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa433d78-89e4-43ac-aea2-0ad75a3a916a","Type":"ContainerStarted","Data":"ea952bcf9eecedbc75f60297b97a8a8656045e4810e45dad5a68700ac282440b"} Sep 30 17:34:11 crc kubenswrapper[4688]: I0930 17:34:11.802158 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.519290971 podStartE2EDuration="11.802132613s" podCreationTimestamp="2025-09-30 17:34:00 +0000 UTC" firstStartedPulling="2025-09-30 17:34:01.251938376 +0000 UTC m=+1098.547375934" lastFinishedPulling="2025-09-30 17:34:10.534780018 +0000 UTC m=+1107.830217576" observedRunningTime="2025-09-30 17:34:11.797670928 +0000 UTC m=+1109.093108506" watchObservedRunningTime="2025-09-30 17:34:11.802132613 +0000 UTC m=+1109.097570171" Sep 30 17:34:11 crc kubenswrapper[4688]: I0930 17:34:11.834141 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.011913264 podStartE2EDuration="9.83412244s" 
podCreationTimestamp="2025-09-30 17:34:02 +0000 UTC" firstStartedPulling="2025-09-30 17:34:03.531065892 +0000 UTC m=+1100.826503450" lastFinishedPulling="2025-09-30 17:34:08.353275078 +0000 UTC m=+1105.648712626" observedRunningTime="2025-09-30 17:34:11.829560562 +0000 UTC m=+1109.124998120" watchObservedRunningTime="2025-09-30 17:34:11.83412244 +0000 UTC m=+1109.129559998" Sep 30 17:34:12 crc kubenswrapper[4688]: I0930 17:34:12.179470 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-67b5cc9656-9lxh4" Sep 30 17:34:12 crc kubenswrapper[4688]: I0930 17:34:12.669301 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7cb94dd69b-49qxn" Sep 30 17:34:12 crc kubenswrapper[4688]: I0930 17:34:12.791864 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Sep 30 17:34:13 crc kubenswrapper[4688]: I0930 17:34:13.251528 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-67b5cc9656-9lxh4" Sep 30 17:34:14 crc kubenswrapper[4688]: I0930 17:34:14.497354 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:15 crc kubenswrapper[4688]: I0930 17:34:15.060018 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:15 crc kubenswrapper[4688]: I0930 17:34:15.854899 4688 generic.go:334] "Generic (PLEG): container finished" podID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerID="ab370381cc5ea5652d36adf9913981005d1fcd95b8fe553b81c6b36512c0ecd8" exitCode=0 Sep 30 17:34:15 crc kubenswrapper[4688]: I0930 17:34:15.854970 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64499996d4-ttr8f" event={"ID":"3281614b-02a1-4b5f-a3e7-ab27cb5b2683","Type":"ContainerDied","Data":"ab370381cc5ea5652d36adf9913981005d1fcd95b8fe553b81c6b36512c0ecd8"} Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.021782 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.185297 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-httpd-config\") pod \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.186146 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-ovndb-tls-certs\") pod \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.186318 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-config\") pod \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.186453 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-combined-ca-bundle\") pod \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.186536 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h597\" (UniqueName: \"kubernetes.io/projected/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-kube-api-access-7h597\") pod \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\" (UID: \"3281614b-02a1-4b5f-a3e7-ab27cb5b2683\") " Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.195717 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3281614b-02a1-4b5f-a3e7-ab27cb5b2683" (UID: "3281614b-02a1-4b5f-a3e7-ab27cb5b2683"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.218747 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-kube-api-access-7h597" (OuterVolumeSpecName: "kube-api-access-7h597") pod "3281614b-02a1-4b5f-a3e7-ab27cb5b2683" (UID: "3281614b-02a1-4b5f-a3e7-ab27cb5b2683"). InnerVolumeSpecName "kube-api-access-7h597". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.264436 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3281614b-02a1-4b5f-a3e7-ab27cb5b2683" (UID: "3281614b-02a1-4b5f-a3e7-ab27cb5b2683"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.271635 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-config" (OuterVolumeSpecName: "config") pod "3281614b-02a1-4b5f-a3e7-ab27cb5b2683" (UID: "3281614b-02a1-4b5f-a3e7-ab27cb5b2683"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.290099 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3281614b-02a1-4b5f-a3e7-ab27cb5b2683" (UID: "3281614b-02a1-4b5f-a3e7-ab27cb5b2683"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.290721 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-httpd-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.290771 4688 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.290785 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.290801 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.290815 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h597\" (UniqueName: \"kubernetes.io/projected/3281614b-02a1-4b5f-a3e7-ab27cb5b2683-kube-api-access-7h597\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.867141 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64499996d4-ttr8f" event={"ID":"3281614b-02a1-4b5f-a3e7-ab27cb5b2683","Type":"ContainerDied","Data":"9671bd0246a6d0db61c2d009cb47e19f3c6efbea76f1e6633cfe137f99ce8509"} Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.867222 4688 scope.go:117] "RemoveContainer" containerID="4c2e1e343c264c38fb88512b51dcb2296ee7876e6d54a16baf3972d32cd7d8be" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.867504 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64499996d4-ttr8f" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.895549 4688 scope.go:117] "RemoveContainer" containerID="ab370381cc5ea5652d36adf9913981005d1fcd95b8fe553b81c6b36512c0ecd8" Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.920151 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64499996d4-ttr8f"] Sep 30 17:34:16 crc kubenswrapper[4688]: I0930 17:34:16.930472 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64499996d4-ttr8f"] Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.229492 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Sep 30 17:34:17 crc kubenswrapper[4688]: E0930 17:34:17.230136 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerName="neutron-api" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.230162 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerName="neutron-api" Sep 30 17:34:17 crc kubenswrapper[4688]: E0930 17:34:17.230212 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerName="neutron-httpd" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.230240 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerName="neutron-httpd" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.230492 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerName="neutron-httpd" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.230515 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" containerName="neutron-api" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.232043 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.234532 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.234656 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-44h7j" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.239140 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.240589 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.314450 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-openstack-config\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.314860 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.314959 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdqck\" (UniqueName: \"kubernetes.io/projected/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-kube-api-access-sdqck\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.315049 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.416987 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-openstack-config\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.417040 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.417067 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdqck\" (UniqueName: \"kubernetes.io/projected/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-kube-api-access-sdqck\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.417099 4688 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.418537 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-openstack-config\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.426107 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.426201 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.452779 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdqck\" (UniqueName: \"kubernetes.io/projected/7ec99c41-9858-45f2-84b9-c3fd05deb7fd-kube-api-access-sdqck\") pod \"openstackclient\" (UID: \"7ec99c41-9858-45f2-84b9-c3fd05deb7fd\") " pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.466201 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3281614b-02a1-4b5f-a3e7-ab27cb5b2683" path="/var/lib/kubelet/pods/3281614b-02a1-4b5f-a3e7-ab27cb5b2683/volumes" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.569932 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Sep 30 17:34:17 crc kubenswrapper[4688]: I0930 17:34:17.928832 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.002421 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-g8zw6"] Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.002679 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" podUID="137a3eca-a79c-464a-abf5-f44887f43c5b" containerName="dnsmasq-dns" containerID="cri-o://b1e17c5b004f92b7e8f46c642db5f3a2141a3be319ec344d49fc5fcf0775b611" gracePeriod=10 Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.110305 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 30 17:34:18 crc kubenswrapper[4688]: W0930 17:34:18.149319 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec99c41_9858_45f2_84b9_c3fd05deb7fd.slice/crio-c802472af6b9e5336f2b7f53d8596cced4397c4061e33c94f555d55ad691175e WatchSource:0}: Error finding container c802472af6b9e5336f2b7f53d8596cced4397c4061e33c94f555d55ad691175e: Status 404 returned error can't find the container with id c802472af6b9e5336f2b7f53d8596cced4397c4061e33c94f555d55ad691175e Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.273373 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.345876 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.898455 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7ec99c41-9858-45f2-84b9-c3fd05deb7fd","Type":"ContainerStarted","Data":"c802472af6b9e5336f2b7f53d8596cced4397c4061e33c94f555d55ad691175e"} Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.901255 4688 generic.go:334] "Generic (PLEG): container finished" podID="137a3eca-a79c-464a-abf5-f44887f43c5b" containerID="b1e17c5b004f92b7e8f46c642db5f3a2141a3be319ec344d49fc5fcf0775b611" exitCode=0 Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.901514 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerName="cinder-scheduler" containerID="cri-o://253ec36853626ca1fa7a5cf65e2ca22d8d5ba792005107de40a08eca876e7cf9" gracePeriod=30 Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.901891 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerName="probe" containerID="cri-o://ea952bcf9eecedbc75f60297b97a8a8656045e4810e45dad5a68700ac282440b" gracePeriod=30 Sep 30 17:34:18 crc kubenswrapper[4688]: I0930 17:34:18.901925 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" event={"ID":"137a3eca-a79c-464a-abf5-f44887f43c5b","Type":"ContainerDied","Data":"b1e17c5b004f92b7e8f46c642db5f3a2141a3be319ec344d49fc5fcf0775b611"} Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.157089 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.265498 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-nb\") pod \"137a3eca-a79c-464a-abf5-f44887f43c5b\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.265608 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-config\") pod \"137a3eca-a79c-464a-abf5-f44887f43c5b\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.265642 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9bzf\" (UniqueName: \"kubernetes.io/projected/137a3eca-a79c-464a-abf5-f44887f43c5b-kube-api-access-r9bzf\") pod \"137a3eca-a79c-464a-abf5-f44887f43c5b\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.265707 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-dns-svc\") pod \"137a3eca-a79c-464a-abf5-f44887f43c5b\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.265800 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-sb\") pod \"137a3eca-a79c-464a-abf5-f44887f43c5b\" (UID: \"137a3eca-a79c-464a-abf5-f44887f43c5b\") " Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.271761 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137a3eca-a79c-464a-abf5-f44887f43c5b-kube-api-access-r9bzf" (OuterVolumeSpecName: "kube-api-access-r9bzf") pod "137a3eca-a79c-464a-abf5-f44887f43c5b" (UID: "137a3eca-a79c-464a-abf5-f44887f43c5b"). InnerVolumeSpecName "kube-api-access-r9bzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.324541 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "137a3eca-a79c-464a-abf5-f44887f43c5b" (UID: "137a3eca-a79c-464a-abf5-f44887f43c5b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.327713 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "137a3eca-a79c-464a-abf5-f44887f43c5b" (UID: "137a3eca-a79c-464a-abf5-f44887f43c5b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.335335 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "137a3eca-a79c-464a-abf5-f44887f43c5b" (UID: "137a3eca-a79c-464a-abf5-f44887f43c5b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.370169 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.370217 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9bzf\" (UniqueName: \"kubernetes.io/projected/137a3eca-a79c-464a-abf5-f44887f43c5b-kube-api-access-r9bzf\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.370235 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.370248 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.398122 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-config" (OuterVolumeSpecName: "config") pod "137a3eca-a79c-464a-abf5-f44887f43c5b" (UID: "137a3eca-a79c-464a-abf5-f44887f43c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.472260 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/137a3eca-a79c-464a-abf5-f44887f43c5b-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.933631 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" event={"ID":"137a3eca-a79c-464a-abf5-f44887f43c5b","Type":"ContainerDied","Data":"cd9629813a0fae7c041171086929fb8740499da3153d329cf44a83d17090c62c"} Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.933700 4688 scope.go:117] "RemoveContainer" containerID="b1e17c5b004f92b7e8f46c642db5f3a2141a3be319ec344d49fc5fcf0775b611" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.933871 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-g8zw6" Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.970962 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-g8zw6"] Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.987787 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-g8zw6"] Sep 30 17:34:19 crc kubenswrapper[4688]: I0930 17:34:19.993059 4688 scope.go:117] "RemoveContainer" containerID="37db38143a59d6c6247de30deb49c761203a23b38faf907b78d0a5d10ef3b749" Sep 30 17:34:20 crc kubenswrapper[4688]: I0930 17:34:20.134678 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:20 crc kubenswrapper[4688]: I0930 17:34:20.953721 4688 generic.go:334] "Generic (PLEG): container finished" podID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerID="ea952bcf9eecedbc75f60297b97a8a8656045e4810e45dad5a68700ac282440b" exitCode=0 Sep 30 17:34:20 crc kubenswrapper[4688]: I0930 17:34:20.953808 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa433d78-89e4-43ac-aea2-0ad75a3a916a","Type":"ContainerDied","Data":"ea952bcf9eecedbc75f60297b97a8a8656045e4810e45dad5a68700ac282440b"} Sep 30 17:34:21 crc kubenswrapper[4688]: I0930 17:34:21.076040 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f77664db-t4vhn" Sep 30 17:34:21 crc kubenswrapper[4688]: I0930 17:34:21.195646 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5584f7f6cd-rpz5j"] Sep 30 17:34:21 crc kubenswrapper[4688]: I0930 17:34:21.196007 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5584f7f6cd-rpz5j" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api-log" containerID="cri-o://bb7f09e9572330e9007b30f378a5f094037f5ea7a6562c94b7d61bfe1a44d96b" gracePeriod=30 Sep 30 17:34:21 crc kubenswrapper[4688]: I0930 17:34:21.196648 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5584f7f6cd-rpz5j" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api" containerID="cri-o://4802b48bdd9da15bbafc48cff8ce4909f097635ff4250b8d5b4030af83be3c40" gracePeriod=30 Sep 30 17:34:21 crc kubenswrapper[4688]: I0930 17:34:21.337335 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6dd7b7d6d-nfbnw" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Sep 30 17:34:21 crc kubenswrapper[4688]: E0930 17:34:21.355008 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e03c74e_fa0b_4d3c_94c9_9434b1266188.slice/crio-bb7f09e9572330e9007b30f378a5f094037f5ea7a6562c94b7d61bfe1a44d96b.scope\": RecentStats: unable to find data in memory cache]" Sep 30 17:34:21 crc kubenswrapper[4688]: I0930 17:34:21.449024 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137a3eca-a79c-464a-abf5-f44887f43c5b" path="/var/lib/kubelet/pods/137a3eca-a79c-464a-abf5-f44887f43c5b/volumes" Sep 30 17:34:21 crc kubenswrapper[4688]: I0930 17:34:21.972936 4688 generic.go:334] "Generic (PLEG): container 
finished" podID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerID="bb7f09e9572330e9007b30f378a5f094037f5ea7a6562c94b7d61bfe1a44d96b" exitCode=143 Sep 30 17:34:21 crc kubenswrapper[4688]: I0930 17:34:21.973003 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5584f7f6cd-rpz5j" event={"ID":"1e03c74e-fa0b-4d3c-94c9-9434b1266188","Type":"ContainerDied","Data":"bb7f09e9572330e9007b30f378a5f094037f5ea7a6562c94b7d61bfe1a44d96b"} Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.291434 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-647757d8c5-6pzkp"] Sep 30 17:34:22 crc kubenswrapper[4688]: E0930 17:34:22.292193 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137a3eca-a79c-464a-abf5-f44887f43c5b" containerName="init" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.292261 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="137a3eca-a79c-464a-abf5-f44887f43c5b" containerName="init" Sep 30 17:34:22 crc kubenswrapper[4688]: E0930 17:34:22.292351 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137a3eca-a79c-464a-abf5-f44887f43c5b" containerName="dnsmasq-dns" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.292403 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="137a3eca-a79c-464a-abf5-f44887f43c5b" containerName="dnsmasq-dns" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.292695 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="137a3eca-a79c-464a-abf5-f44887f43c5b" containerName="dnsmasq-dns" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.293824 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.296051 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.296067 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.297540 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.329491 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-647757d8c5-6pzkp"] Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.397923 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.472258 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-public-tls-certs\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.472740 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-combined-ca-bundle\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.472801 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-log-httpd\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.472847 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-internal-tls-certs\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.472885 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-config-data\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.472912 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.472946 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-run-httpd\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.472967 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b458m\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-kube-api-access-b458m\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.548115 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.548220 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.573338 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.574486 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-config-data\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: 
\"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.574544 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.574611 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-run-httpd\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.574644 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b458m\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-kube-api-access-b458m\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.574731 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-public-tls-certs\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: E0930 17:34:22.574755 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.574782 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-combined-ca-bundle\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.574844 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-log-httpd\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.574914 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-internal-tls-certs\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: E0930 17:34:22.574792 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found Sep 30 17:34:22 crc kubenswrapper[4688]: E0930 17:34:22.575899 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. 
No retries permitted until 2025-09-30 17:34:23.075861389 +0000 UTC m=+1120.371298937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.576352 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="ceilometer-central-agent" containerID="cri-o://3a6605ee0f7aec514aa3fb9b73e106e2e56d3f1b2b8ee1276f94d9ab9afde87a" gracePeriod=30 Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.576513 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="proxy-httpd" containerID="cri-o://6985ee228e8d527447f81d61f1575a514d7f6d182a05ff4b6e03428078b5c10b" gracePeriod=30 Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.576631 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="sg-core" containerID="cri-o://b6d590974949056296f6d81ffbdc58f9150be02e76174f87c2f3ef4004eac149" gracePeriod=30 Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.576688 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="ceilometer-notification-agent" containerID="cri-o://f467955e8e74e51901629bb1bdcaaecaf5621dfc2d42c8aa59e5d8348a71a97b" gracePeriod=30 Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.682740 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": read tcp 10.217.0.2:48356->10.217.0.160:3000: read: connection reset by peer" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.871459 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-run-httpd\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.872884 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-log-httpd\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.873535 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-config-data\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.874105 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-internal-tls-certs\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: 
\"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.878587 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b458m\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-kube-api-access-b458m\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.881542 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-combined-ca-bundle\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.885726 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-public-tls-certs\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.991438 4688 generic.go:334] "Generic (PLEG): container finished" podID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerID="6985ee228e8d527447f81d61f1575a514d7f6d182a05ff4b6e03428078b5c10b" exitCode=0 Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.991493 4688 generic.go:334] "Generic (PLEG): container finished" podID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerID="b6d590974949056296f6d81ffbdc58f9150be02e76174f87c2f3ef4004eac149" exitCode=2 Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.991526 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerDied","Data":"6985ee228e8d527447f81d61f1575a514d7f6d182a05ff4b6e03428078b5c10b"} Sep 30 17:34:22 crc kubenswrapper[4688]: I0930 17:34:22.991586 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerDied","Data":"b6d590974949056296f6d81ffbdc58f9150be02e76174f87c2f3ef4004eac149"} Sep 30 17:34:23 crc kubenswrapper[4688]: I0930 17:34:23.088428 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:23 crc kubenswrapper[4688]: E0930 17:34:23.088624 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:34:23 crc kubenswrapper[4688]: E0930 17:34:23.088648 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found Sep 30 17:34:23 crc kubenswrapper[4688]: E0930 17:34:23.088712 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:34:24.088690893 +0000 UTC m=+1121.384128451 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found Sep 30 17:34:24 crc kubenswrapper[4688]: I0930 17:34:24.119102 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:24 crc kubenswrapper[4688]: E0930 17:34:24.119689 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:34:24 crc kubenswrapper[4688]: E0930 17:34:24.119730 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found Sep 30 17:34:24 crc kubenswrapper[4688]: E0930 17:34:24.119798 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:34:26.119776081 +0000 UTC m=+1123.415213639 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found Sep 30 17:34:24 crc kubenswrapper[4688]: I0930 17:34:24.370527 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5584f7f6cd-rpz5j" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:60276->10.217.0.164:9311: read: connection reset by peer" Sep 30 17:34:24 crc kubenswrapper[4688]: I0930 17:34:24.371265 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5584f7f6cd-rpz5j" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:60286->10.217.0.164:9311: read: connection reset by peer" Sep 30 17:34:25 crc kubenswrapper[4688]: I0930 17:34:25.049523 4688 generic.go:334] "Generic (PLEG): container finished" podID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerID="4802b48bdd9da15bbafc48cff8ce4909f097635ff4250b8d5b4030af83be3c40" exitCode=0 Sep 30 17:34:25 crc kubenswrapper[4688]: I0930 17:34:25.049640 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5584f7f6cd-rpz5j" event={"ID":"1e03c74e-fa0b-4d3c-94c9-9434b1266188","Type":"ContainerDied","Data":"4802b48bdd9da15bbafc48cff8ce4909f097635ff4250b8d5b4030af83be3c40"} Sep 30 17:34:25 crc kubenswrapper[4688]: I0930 17:34:25.053098 4688 generic.go:334] "Generic (PLEG): container finished" podID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerID="3a6605ee0f7aec514aa3fb9b73e106e2e56d3f1b2b8ee1276f94d9ab9afde87a" exitCode=0 Sep 30 17:34:25 crc kubenswrapper[4688]: I0930 17:34:25.053159 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerDied","Data":"3a6605ee0f7aec514aa3fb9b73e106e2e56d3f1b2b8ee1276f94d9ab9afde87a"} Sep 30 17:34:26 crc kubenswrapper[4688]: I0930 17:34:26.072006 4688 generic.go:334] "Generic (PLEG): container finished" podID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerID="f467955e8e74e51901629bb1bdcaaecaf5621dfc2d42c8aa59e5d8348a71a97b" exitCode=0 Sep 30 17:34:26 crc kubenswrapper[4688]: I0930 17:34:26.072288 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerDied","Data":"f467955e8e74e51901629bb1bdcaaecaf5621dfc2d42c8aa59e5d8348a71a97b"} Sep 30 17:34:26 crc kubenswrapper[4688]: I0930 17:34:26.075221 4688 generic.go:334] "Generic (PLEG): container finished" podID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerID="253ec36853626ca1fa7a5cf65e2ca22d8d5ba792005107de40a08eca876e7cf9" exitCode=0 Sep 30 17:34:26 crc kubenswrapper[4688]: I0930 17:34:26.075256 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa433d78-89e4-43ac-aea2-0ad75a3a916a","Type":"ContainerDied","Data":"253ec36853626ca1fa7a5cf65e2ca22d8d5ba792005107de40a08eca876e7cf9"} Sep 30 17:34:26 crc kubenswrapper[4688]: I0930 17:34:26.170017 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:26 crc kubenswrapper[4688]: E0930 17:34:26.170277 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:34:26 crc kubenswrapper[4688]: E0930 17:34:26.170316 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found Sep 30 17:34:26 crc kubenswrapper[4688]: E0930 17:34:26.170408 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:34:30.170382309 +0000 UTC m=+1127.465819867 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found Sep 30 17:34:26 crc kubenswrapper[4688]: I0930 17:34:26.998218 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5584f7f6cd-rpz5j" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Sep 30 17:34:26 crc kubenswrapper[4688]: I0930 17:34:26.998402 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5584f7f6cd-rpz5j" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Sep 30 17:34:28 crc kubenswrapper[4688]: I0930 17:34:28.124045 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.167:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:34:30 crc kubenswrapper[4688]: I0930 17:34:30.193307 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:30 crc kubenswrapper[4688]: E0930 17:34:30.193486 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:34:30 crc kubenswrapper[4688]: E0930 17:34:30.193768 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found Sep 30 17:34:30 crc kubenswrapper[4688]: E0930 17:34:30.193838 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:34:38.193812885 +0000 UTC m=+1135.489250443 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.062235 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.072408 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.151092 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.168299 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fa433d78-89e4-43ac-aea2-0ad75a3a916a","Type":"ContainerDied","Data":"a4c98ece9cc8936809b1ef8b53e60f13ae7ba8793b78b01d8e98e1b03ce55a89"} Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.168368 4688 scope.go:117] "RemoveContainer" containerID="ea952bcf9eecedbc75f60297b97a8a8656045e4810e45dad5a68700ac282440b" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.168529 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.206021 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d6ae60e-e1d8-4807-8061-062b2fde551d","Type":"ContainerDied","Data":"a97fa7da64f1af385774c8fab307c8d6adce6621a08e03c87de99cb924cd7b21"} Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.206160 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.214714 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e03c74e-fa0b-4d3c-94c9-9434b1266188-logs\") pod \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.214765 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99w69\" (UniqueName: \"kubernetes.io/projected/1e03c74e-fa0b-4d3c-94c9-9434b1266188-kube-api-access-99w69\") pod \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.214885 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data\") pod \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.214996 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tspbj\" (UniqueName: \"kubernetes.io/projected/fa433d78-89e4-43ac-aea2-0ad75a3a916a-kube-api-access-tspbj\") pod \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.215080 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data\") pod \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.215150 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-combined-ca-bundle\") pod \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.215213 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-scripts\") pod 
\"fa433d78-89e4-43ac-aea2-0ad75a3a916a\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.215242 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data-custom\") pod \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.215276 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa433d78-89e4-43ac-aea2-0ad75a3a916a-etc-machine-id\") pod \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.215285 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e03c74e-fa0b-4d3c-94c9-9434b1266188-logs" (OuterVolumeSpecName: "logs") pod "1e03c74e-fa0b-4d3c-94c9-9434b1266188" (UID: "1e03c74e-fa0b-4d3c-94c9-9434b1266188"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.215331 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-combined-ca-bundle\") pod \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\" (UID: \"fa433d78-89e4-43ac-aea2-0ad75a3a916a\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.215363 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data-custom\") pod \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\" (UID: \"1e03c74e-fa0b-4d3c-94c9-9434b1266188\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.216059 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e03c74e-fa0b-4d3c-94c9-9434b1266188-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.217361 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa433d78-89e4-43ac-aea2-0ad75a3a916a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fa433d78-89e4-43ac-aea2-0ad75a3a916a" (UID: "fa433d78-89e4-43ac-aea2-0ad75a3a916a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.220762 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5584f7f6cd-rpz5j" event={"ID":"1e03c74e-fa0b-4d3c-94c9-9434b1266188","Type":"ContainerDied","Data":"5c5ff6d29309f8103cdcca8c3f4e87770ac104da8853dabef0a3bdf89aa5e06d"} Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.220865 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5584f7f6cd-rpz5j" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.229734 4688 scope.go:117] "RemoveContainer" containerID="253ec36853626ca1fa7a5cf65e2ca22d8d5ba792005107de40a08eca876e7cf9" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.231185 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1e03c74e-fa0b-4d3c-94c9-9434b1266188" (UID: "1e03c74e-fa0b-4d3c-94c9-9434b1266188"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.232877 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e03c74e-fa0b-4d3c-94c9-9434b1266188-kube-api-access-99w69" (OuterVolumeSpecName: "kube-api-access-99w69") pod "1e03c74e-fa0b-4d3c-94c9-9434b1266188" (UID: "1e03c74e-fa0b-4d3c-94c9-9434b1266188"). InnerVolumeSpecName "kube-api-access-99w69". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.233298 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa433d78-89e4-43ac-aea2-0ad75a3a916a-kube-api-access-tspbj" (OuterVolumeSpecName: "kube-api-access-tspbj") pod "fa433d78-89e4-43ac-aea2-0ad75a3a916a" (UID: "fa433d78-89e4-43ac-aea2-0ad75a3a916a"). InnerVolumeSpecName "kube-api-access-tspbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.235786 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa433d78-89e4-43ac-aea2-0ad75a3a916a" (UID: "fa433d78-89e4-43ac-aea2-0ad75a3a916a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.236465 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-scripts" (OuterVolumeSpecName: "scripts") pod "fa433d78-89e4-43ac-aea2-0ad75a3a916a" (UID: "fa433d78-89e4-43ac-aea2-0ad75a3a916a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.253218 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e03c74e-fa0b-4d3c-94c9-9434b1266188" (UID: "1e03c74e-fa0b-4d3c-94c9-9434b1266188"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.284770 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data" (OuterVolumeSpecName: "config-data") pod "1e03c74e-fa0b-4d3c-94c9-9434b1266188" (UID: "1e03c74e-fa0b-4d3c-94c9-9434b1266188"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.308343 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa433d78-89e4-43ac-aea2-0ad75a3a916a" (UID: "fa433d78-89e4-43ac-aea2-0ad75a3a916a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.316975 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-run-httpd\") pod \"7d6ae60e-e1d8-4807-8061-062b2fde551d\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.317225 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-combined-ca-bundle\") pod \"7d6ae60e-e1d8-4807-8061-062b2fde551d\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.317273 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-sg-core-conf-yaml\") pod \"7d6ae60e-e1d8-4807-8061-062b2fde551d\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.317380 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjhcv\" (UniqueName: \"kubernetes.io/projected/7d6ae60e-e1d8-4807-8061-062b2fde551d-kube-api-access-jjhcv\") pod \"7d6ae60e-e1d8-4807-8061-062b2fde551d\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.317423 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-scripts\") pod \"7d6ae60e-e1d8-4807-8061-062b2fde551d\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.317474 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-config-data\") pod \"7d6ae60e-e1d8-4807-8061-062b2fde551d\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.317465 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7d6ae60e-e1d8-4807-8061-062b2fde551d" (UID: "7d6ae60e-e1d8-4807-8061-062b2fde551d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.317533 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-log-httpd\") pod \"7d6ae60e-e1d8-4807-8061-062b2fde551d\" (UID: \"7d6ae60e-e1d8-4807-8061-062b2fde551d\") " Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318395 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7d6ae60e-e1d8-4807-8061-062b2fde551d" (UID: "7d6ae60e-e1d8-4807-8061-062b2fde551d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318416 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318609 4688 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fa433d78-89e4-43ac-aea2-0ad75a3a916a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318623 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318631 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318641 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318651 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99w69\" (UniqueName: \"kubernetes.io/projected/1e03c74e-fa0b-4d3c-94c9-9434b1266188-kube-api-access-99w69\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318665 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318678 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tspbj\" (UniqueName: \"kubernetes.io/projected/fa433d78-89e4-43ac-aea2-0ad75a3a916a-kube-api-access-tspbj\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318687 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e03c74e-fa0b-4d3c-94c9-9434b1266188-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.318696 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc 
kubenswrapper[4688]: I0930 17:34:31.326711 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-scripts" (OuterVolumeSpecName: "scripts") pod "7d6ae60e-e1d8-4807-8061-062b2fde551d" (UID: "7d6ae60e-e1d8-4807-8061-062b2fde551d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.326911 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6ae60e-e1d8-4807-8061-062b2fde551d-kube-api-access-jjhcv" (OuterVolumeSpecName: "kube-api-access-jjhcv") pod "7d6ae60e-e1d8-4807-8061-062b2fde551d" (UID: "7d6ae60e-e1d8-4807-8061-062b2fde551d"). InnerVolumeSpecName "kube-api-access-jjhcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.340139 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6dd7b7d6d-nfbnw" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.364812 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7d6ae60e-e1d8-4807-8061-062b2fde551d" (UID: "7d6ae60e-e1d8-4807-8061-062b2fde551d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.377783 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data" (OuterVolumeSpecName: "config-data") pod "fa433d78-89e4-43ac-aea2-0ad75a3a916a" (UID: "fa433d78-89e4-43ac-aea2-0ad75a3a916a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.420488 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.420793 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa433d78-89e4-43ac-aea2-0ad75a3a916a-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.420880 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjhcv\" (UniqueName: \"kubernetes.io/projected/7d6ae60e-e1d8-4807-8061-062b2fde551d-kube-api-access-jjhcv\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.420966 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.421035 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d6ae60e-e1d8-4807-8061-062b2fde551d-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.438259 4688 scope.go:117] "RemoveContainer" containerID="6985ee228e8d527447f81d61f1575a514d7f6d182a05ff4b6e03428078b5c10b" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.447494 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d6ae60e-e1d8-4807-8061-062b2fde551d" (UID: "7d6ae60e-e1d8-4807-8061-062b2fde551d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.467028 4688 scope.go:117] "RemoveContainer" containerID="b6d590974949056296f6d81ffbdc58f9150be02e76174f87c2f3ef4004eac149" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.502376 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-config-data" (OuterVolumeSpecName: "config-data") pod "7d6ae60e-e1d8-4807-8061-062b2fde551d" (UID: "7d6ae60e-e1d8-4807-8061-062b2fde551d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.521007 4688 scope.go:117] "RemoveContainer" containerID="f467955e8e74e51901629bb1bdcaaecaf5621dfc2d42c8aa59e5d8348a71a97b" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.526700 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.526745 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6ae60e-e1d8-4807-8061-062b2fde551d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.542373 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.561883 4688 scope.go:117] "RemoveContainer" containerID="3a6605ee0f7aec514aa3fb9b73e106e2e56d3f1b2b8ee1276f94d9ab9afde87a" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.568859 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.599887 4688 scope.go:117] "RemoveContainer" containerID="4802b48bdd9da15bbafc48cff8ce4909f097635ff4250b8d5b4030af83be3c40" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.600184 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:34:31 crc kubenswrapper[4688]: E0930 17:34:31.600914 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="ceilometer-notification-agent" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.600935 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="ceilometer-notification-agent" Sep 30 17:34:31 crc kubenswrapper[4688]: E0930 17:34:31.600946 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api-log" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.600953 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api-log" Sep 30 17:34:31 crc kubenswrapper[4688]: E0930 17:34:31.600973 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.600983 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api" Sep 30 17:34:31 crc kubenswrapper[4688]: E0930 17:34:31.600995 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="sg-core" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601001 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="sg-core" Sep 30 17:34:31 crc kubenswrapper[4688]: E0930 17:34:31.601014 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerName="probe" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601020 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerName="probe" 
Sep 30 17:34:31 crc kubenswrapper[4688]: E0930 17:34:31.601042 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="ceilometer-central-agent"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601048 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="ceilometer-central-agent"
Sep 30 17:34:31 crc kubenswrapper[4688]: E0930 17:34:31.601058 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerName="cinder-scheduler"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601064 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerName="cinder-scheduler"
Sep 30 17:34:31 crc kubenswrapper[4688]: E0930 17:34:31.601081 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="proxy-httpd"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601088 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="proxy-httpd"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601303 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="sg-core"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601316 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api-log"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601331 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="ceilometer-notification-agent"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601346 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" containerName="barbican-api"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601360 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="ceilometer-central-agent"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601374 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerName="probe"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601385 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="proxy-httpd"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.601395 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" containerName="cinder-scheduler"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.603706 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.609605 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.619419 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.638369 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5584f7f6cd-rpz5j"]
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.645978 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5584f7f6cd-rpz5j"]
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.654011 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.660339 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.669973 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.673070 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.675799 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.675998 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.689882 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:34:31 crc kubenswrapper[4688]: E0930 17:34:31.714189 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e03c74e_fa0b_4d3c_94c9_9434b1266188.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e03c74e_fa0b_4d3c_94c9_9434b1266188.slice/crio-5c5ff6d29309f8103cdcca8c3f4e87770ac104da8853dabef0a3bdf89aa5e06d\": RecentStats: unable to find data in memory cache]"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.730509 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.730853 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-config-data\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.731029 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj2kl\" (UniqueName: \"kubernetes.io/projected/8344d27c-987c-4c41-8491-9856f5b93c3f-kube-api-access-vj2kl\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.731167 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.731258 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8344d27c-987c-4c41-8491-9856f5b93c3f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.731466 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-scripts\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.832945 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-config-data\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833016 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-config-data\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-scripts\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833140 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833165 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwglm\" (UniqueName: \"kubernetes.io/projected/9683a077-e82d-4eaa-a3c2-254d0e7028e6-kube-api-access-pwglm\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833186 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-log-httpd\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0"
Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833204 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-run-httpd\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0"
(UniqueName: \"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-run-httpd\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833231 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj2kl\" (UniqueName: \"kubernetes.io/projected/8344d27c-987c-4c41-8491-9856f5b93c3f-kube-api-access-vj2kl\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833265 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833289 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833319 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8344d27c-987c-4c41-8491-9856f5b93c3f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833357 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-scripts\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833381 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.833858 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8344d27c-987c-4c41-8491-9856f5b93c3f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.839430 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.839758 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-config-data\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 
17:34:31.839863 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-scripts\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.840323 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8344d27c-987c-4c41-8491-9856f5b93c3f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.857863 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj2kl\" (UniqueName: \"kubernetes.io/projected/8344d27c-987c-4c41-8491-9856f5b93c3f-kube-api-access-vj2kl\") pod \"cinder-scheduler-0\" (UID: \"8344d27c-987c-4c41-8491-9856f5b93c3f\") " pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.931334 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.935816 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.936487 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwglm\" (UniqueName: \"kubernetes.io/projected/9683a077-e82d-4eaa-a3c2-254d0e7028e6-kube-api-access-pwglm\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.936635 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-log-httpd\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.936791 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-run-httpd\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.936968 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.937204 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-run-httpd\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.937231 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-log-httpd\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.937898 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-config-data\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.938035 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-scripts\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.939851 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.940497 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.942427 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-config-data\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.942937 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-scripts\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.956255 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwglm\" (UniqueName: \"kubernetes.io/projected/9683a077-e82d-4eaa-a3c2-254d0e7028e6-kube-api-access-pwglm\") pod \"ceilometer-0\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " pod="openstack/ceilometer-0" Sep 30 17:34:31 crc kubenswrapper[4688]: I0930 17:34:31.990075 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:32 crc kubenswrapper[4688]: I0930 17:34:32.946946 4688 scope.go:117] "RemoveContainer" containerID="bb7f09e9572330e9007b30f378a5f094037f5ea7a6562c94b7d61bfe1a44d96b" Sep 30 17:34:33 crc kubenswrapper[4688]: I0930 17:34:33.443098 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e03c74e-fa0b-4d3c-94c9-9434b1266188" path="/var/lib/kubelet/pods/1e03c74e-fa0b-4d3c-94c9-9434b1266188/volumes" Sep 30 17:34:33 crc kubenswrapper[4688]: I0930 17:34:33.444446 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" path="/var/lib/kubelet/pods/7d6ae60e-e1d8-4807-8061-062b2fde551d/volumes" Sep 30 17:34:33 crc kubenswrapper[4688]: I0930 17:34:33.445137 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa433d78-89e4-43ac-aea2-0ad75a3a916a" path="/var/lib/kubelet/pods/fa433d78-89e4-43ac-aea2-0ad75a3a916a/volumes" Sep 30 17:34:33 crc kubenswrapper[4688]: I0930 17:34:33.530938 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 17:34:33 crc kubenswrapper[4688]: I0930 17:34:33.605650 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.062459 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.262250 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerStarted","Data":"aabb9a16d19af376c687236e3727d17402637ae03ea8c71282f3625298dd061b"} Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.264811 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7ec99c41-9858-45f2-84b9-c3fd05deb7fd","Type":"ContainerStarted","Data":"2c93f0b40012cbc1e5c3f2fd28d2660d961ec3f16c195edf689589a9d3fb926e"} Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.270030 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8344d27c-987c-4c41-8491-9856f5b93c3f","Type":"ContainerStarted","Data":"7ade9d696a679e1bb65cf50114e14630cbe038f605a0e95ba1ef6ef8de9459ef"} Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.300264 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.464628208 podStartE2EDuration="17.300236964s" podCreationTimestamp="2025-09-30 17:34:17 +0000 UTC" firstStartedPulling="2025-09-30 17:34:18.156677195 +0000 UTC m=+1115.452114753" lastFinishedPulling="2025-09-30 17:34:32.992285951 +0000 UTC m=+1130.287723509" observedRunningTime="2025-09-30 17:34:34.284864237 +0000 UTC m=+1131.580301805" watchObservedRunningTime="2025-09-30 17:34:34.300236964 +0000 UTC m=+1131.595674512" Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.817875 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-gx862"] Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.819457 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gx862" Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.836058 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gx862"] Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.915060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899k5\" (UniqueName: \"kubernetes.io/projected/d4b5e55d-c196-48cd-9ea7-c357ef7a693a-kube-api-access-899k5\") pod \"nova-api-db-create-gx862\" (UID: \"d4b5e55d-c196-48cd-9ea7-c357ef7a693a\") " pod="openstack/nova-api-db-create-gx862" Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.964638 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-d4wgr"] Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.965939 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d4wgr" Sep 30 17:34:34 crc kubenswrapper[4688]: I0930 17:34:34.985889 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d4wgr"] Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.016490 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-899k5\" (UniqueName: \"kubernetes.io/projected/d4b5e55d-c196-48cd-9ea7-c357ef7a693a-kube-api-access-899k5\") pod \"nova-api-db-create-gx862\" (UID: \"d4b5e55d-c196-48cd-9ea7-c357ef7a693a\") " pod="openstack/nova-api-db-create-gx862" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.056834 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-899k5\" (UniqueName: \"kubernetes.io/projected/d4b5e55d-c196-48cd-9ea7-c357ef7a693a-kube-api-access-899k5\") pod \"nova-api-db-create-gx862\" (UID: \"d4b5e55d-c196-48cd-9ea7-c357ef7a693a\") " pod="openstack/nova-api-db-create-gx862" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.067865 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-wp4x8"] Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.070733 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wp4x8" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.105669 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-wp4x8"] Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.118741 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2jj\" (UniqueName: \"kubernetes.io/projected/644e9c91-83cf-485d-bddb-700b3d5473e0-kube-api-access-st2jj\") pod \"nova-cell0-db-create-d4wgr\" (UID: \"644e9c91-83cf-485d-bddb-700b3d5473e0\") " pod="openstack/nova-cell0-db-create-d4wgr" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.150336 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gx862" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.222415 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b87vv\" (UniqueName: \"kubernetes.io/projected/aa912d04-6534-42ec-9492-db47f7ac0285-kube-api-access-b87vv\") pod \"nova-cell1-db-create-wp4x8\" (UID: \"aa912d04-6534-42ec-9492-db47f7ac0285\") " pod="openstack/nova-cell1-db-create-wp4x8" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.222660 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st2jj\" (UniqueName: \"kubernetes.io/projected/644e9c91-83cf-485d-bddb-700b3d5473e0-kube-api-access-st2jj\") pod \"nova-cell0-db-create-d4wgr\" (UID: \"644e9c91-83cf-485d-bddb-700b3d5473e0\") " pod="openstack/nova-cell0-db-create-d4wgr" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.261386 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2jj\" (UniqueName: \"kubernetes.io/projected/644e9c91-83cf-485d-bddb-700b3d5473e0-kube-api-access-st2jj\") pod \"nova-cell0-db-create-d4wgr\" (UID: \"644e9c91-83cf-485d-bddb-700b3d5473e0\") " pod="openstack/nova-cell0-db-create-d4wgr" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.290080 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d4wgr" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.295914 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerStarted","Data":"e4c9325653ef7264455f2a1e7b6389ab045d5d97e98c45329514e2f2a181108b"} Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.297621 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8344d27c-987c-4c41-8491-9856f5b93c3f","Type":"ContainerStarted","Data":"9ff765c32407c7c8d5c16cc768d59e0c43d36c8ad31dd8a2421e499858e5ad4f"} Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.331513 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b87vv\" (UniqueName: \"kubernetes.io/projected/aa912d04-6534-42ec-9492-db47f7ac0285-kube-api-access-b87vv\") pod \"nova-cell1-db-create-wp4x8\" (UID: \"aa912d04-6534-42ec-9492-db47f7ac0285\") " pod="openstack/nova-cell1-db-create-wp4x8" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.373492 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b87vv\" (UniqueName: \"kubernetes.io/projected/aa912d04-6534-42ec-9492-db47f7ac0285-kube-api-access-b87vv\") pod \"nova-cell1-db-create-wp4x8\" (UID: \"aa912d04-6534-42ec-9492-db47f7ac0285\") " pod="openstack/nova-cell1-db-create-wp4x8" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.451980 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-wp4x8" Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.757844 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gx862"] Sep 30 17:34:35 crc kubenswrapper[4688]: W0930 17:34:35.930121 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod644e9c91_83cf_485d_bddb_700b3d5473e0.slice/crio-1cb692583dea82d976afac365abf4ffaa428defe69ce45334a842566e2efdef4 WatchSource:0}: Error finding container 1cb692583dea82d976afac365abf4ffaa428defe69ce45334a842566e2efdef4: Status 404 returned error can't find the container with id 1cb692583dea82d976afac365abf4ffaa428defe69ce45334a842566e2efdef4 Sep 30 17:34:35 crc kubenswrapper[4688]: I0930 17:34:35.933248 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d4wgr"] Sep 30 17:34:36 crc kubenswrapper[4688]: I0930 17:34:36.143149 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-wp4x8"] Sep 30 17:34:36 crc kubenswrapper[4688]: I0930 17:34:36.310012 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gx862" event={"ID":"d4b5e55d-c196-48cd-9ea7-c357ef7a693a","Type":"ContainerStarted","Data":"fe082b47d0653f9f2575c32f07495e897834475fb29194bf18e5c742a3668b73"} Sep 30 17:34:36 crc kubenswrapper[4688]: I0930 17:34:36.319023 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wp4x8" event={"ID":"aa912d04-6534-42ec-9492-db47f7ac0285","Type":"ContainerStarted","Data":"3b016f34cb9e2ec2be895e9fab979947fac9119a6ba2108f2f755b2a51908864"} Sep 30 17:34:36 crc kubenswrapper[4688]: I0930 17:34:36.323384 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8344d27c-987c-4c41-8491-9856f5b93c3f","Type":"ContainerStarted","Data":"fca8c413f9248ed095d38ae331d6f1db13b071036804373bf1484f5ada02d946"} Sep 30 17:34:36 crc kubenswrapper[4688]: I0930 17:34:36.328694 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d4wgr" event={"ID":"644e9c91-83cf-485d-bddb-700b3d5473e0","Type":"ContainerStarted","Data":"1cb692583dea82d976afac365abf4ffaa428defe69ce45334a842566e2efdef4"} Sep 30 17:34:36 crc kubenswrapper[4688]: I0930 17:34:36.378845 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.378817955 podStartE2EDuration="5.378817955s" podCreationTimestamp="2025-09-30 17:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:34:36.347447015 +0000 UTC m=+1133.642884583" watchObservedRunningTime="2025-09-30 17:34:36.378817955 +0000 UTC m=+1133.674255513" Sep 30 17:34:36 crc kubenswrapper[4688]: I0930 17:34:36.933060 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.343985 4688 generic.go:334] "Generic (PLEG): container finished" podID="d4b5e55d-c196-48cd-9ea7-c357ef7a693a" containerID="7b17f1e32e5232e4129b198b920f8420ef7d12d4ab165c21c9aaee70c321b01b" exitCode=0 Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.344103 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gx862" 
event={"ID":"d4b5e55d-c196-48cd-9ea7-c357ef7a693a","Type":"ContainerDied","Data":"7b17f1e32e5232e4129b198b920f8420ef7d12d4ab165c21c9aaee70c321b01b"} Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.348368 4688 generic.go:334] "Generic (PLEG): container finished" podID="aa912d04-6534-42ec-9492-db47f7ac0285" containerID="cc8625a8863c3f0ede2fdfa14ef9276992da2490d4956c63885946b1e05da68a" exitCode=0 Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.348456 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wp4x8" event={"ID":"aa912d04-6534-42ec-9492-db47f7ac0285","Type":"ContainerDied","Data":"cc8625a8863c3f0ede2fdfa14ef9276992da2490d4956c63885946b1e05da68a"} Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.356270 4688 generic.go:334] "Generic (PLEG): container finished" podID="644e9c91-83cf-485d-bddb-700b3d5473e0" containerID="df10db116da77b54eab5bdfa6b8fb53455b8c758f859bc8577266cff7e3e6001" exitCode=0 Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.356320 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d4wgr" event={"ID":"644e9c91-83cf-485d-bddb-700b3d5473e0","Type":"ContainerDied","Data":"df10db116da77b54eab5bdfa6b8fb53455b8c758f859bc8577266cff7e3e6001"} Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.363933 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerStarted","Data":"d85899ad6aac8091a73c55948ddb1ef755e2d1c3a2c140d1c5fe98f2e97eece2"} Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.836769 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.897831 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a40d4d2-4534-4959-b23a-07bd542d56b7-logs\") pod \"7a40d4d2-4534-4959-b23a-07bd542d56b7\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.898220 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-tls-certs\") pod \"7a40d4d2-4534-4959-b23a-07bd542d56b7\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.898316 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-combined-ca-bundle\") pod \"7a40d4d2-4534-4959-b23a-07bd542d56b7\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.898415 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-config-data\") pod \"7a40d4d2-4534-4959-b23a-07bd542d56b7\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.898491 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pfn8\" (UniqueName: \"kubernetes.io/projected/7a40d4d2-4534-4959-b23a-07bd542d56b7-kube-api-access-7pfn8\") pod \"7a40d4d2-4534-4959-b23a-07bd542d56b7\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " Sep 30 17:34:37 crc 
kubenswrapper[4688]: I0930 17:34:37.898585 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-secret-key\") pod \"7a40d4d2-4534-4959-b23a-07bd542d56b7\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.898754 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-scripts\") pod \"7a40d4d2-4534-4959-b23a-07bd542d56b7\" (UID: \"7a40d4d2-4534-4959-b23a-07bd542d56b7\") " Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.905186 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a40d4d2-4534-4959-b23a-07bd542d56b7-logs" (OuterVolumeSpecName: "logs") pod "7a40d4d2-4534-4959-b23a-07bd542d56b7" (UID: "7a40d4d2-4534-4959-b23a-07bd542d56b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.905858 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a40d4d2-4534-4959-b23a-07bd542d56b7-kube-api-access-7pfn8" (OuterVolumeSpecName: "kube-api-access-7pfn8") pod "7a40d4d2-4534-4959-b23a-07bd542d56b7" (UID: "7a40d4d2-4534-4959-b23a-07bd542d56b7"). InnerVolumeSpecName "kube-api-access-7pfn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.911847 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7a40d4d2-4534-4959-b23a-07bd542d56b7" (UID: "7a40d4d2-4534-4959-b23a-07bd542d56b7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.928806 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-config-data" (OuterVolumeSpecName: "config-data") pod "7a40d4d2-4534-4959-b23a-07bd542d56b7" (UID: "7a40d4d2-4534-4959-b23a-07bd542d56b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.941400 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-scripts" (OuterVolumeSpecName: "scripts") pod "7a40d4d2-4534-4959-b23a-07bd542d56b7" (UID: "7a40d4d2-4534-4959-b23a-07bd542d56b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.952961 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a40d4d2-4534-4959-b23a-07bd542d56b7" (UID: "7a40d4d2-4534-4959-b23a-07bd542d56b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:37 crc kubenswrapper[4688]: I0930 17:34:37.956295 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "7a40d4d2-4534-4959-b23a-07bd542d56b7" (UID: "7a40d4d2-4534-4959-b23a-07bd542d56b7"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.001432 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.001462 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a40d4d2-4534-4959-b23a-07bd542d56b7-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.001472 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.001481 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.001491 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a40d4d2-4534-4959-b23a-07bd542d56b7-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.001503 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pfn8\" (UniqueName: \"kubernetes.io/projected/7a40d4d2-4534-4959-b23a-07bd542d56b7-kube-api-access-7pfn8\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.001524 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7a40d4d2-4534-4959-b23a-07bd542d56b7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.206175 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:38 crc kubenswrapper[4688]: E0930 17:34:38.206380 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:34:38 crc kubenswrapper[4688]: E0930 17:34:38.206397 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found Sep 30 17:34:38 crc kubenswrapper[4688]: E0930 17:34:38.206451 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:34:54.206433469 +0000 UTC m=+1151.501871027 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.374601 4688 generic.go:334] "Generic (PLEG): container finished" podID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerID="6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a" exitCode=137 Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.374696 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dd7b7d6d-nfbnw" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.374688 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dd7b7d6d-nfbnw" event={"ID":"7a40d4d2-4534-4959-b23a-07bd542d56b7","Type":"ContainerDied","Data":"6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a"} Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.377803 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dd7b7d6d-nfbnw" event={"ID":"7a40d4d2-4534-4959-b23a-07bd542d56b7","Type":"ContainerDied","Data":"29384fed607cf520fc9adf3785a83de87cae189fd175a2801f568c255ad05475"} Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.377842 4688 scope.go:117] "RemoveContainer" containerID="0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3" Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.399147 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerStarted","Data":"4e9793b3f34067ca1d1930517d28508bfe48370d33a6906b9819b5d81b3f6d14"} Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.471379 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6dd7b7d6d-nfbnw"] Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.484774 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6dd7b7d6d-nfbnw"] Sep 30 17:34:38 crc kubenswrapper[4688]: I0930 17:34:38.889755 4688 scope.go:117] "RemoveContainer" containerID="6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.029785 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gx862" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.059805 4688 scope.go:117] "RemoveContainer" containerID="0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3" Sep 30 17:34:39 crc kubenswrapper[4688]: E0930 17:34:39.062697 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3\": container with ID starting with 0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3 not found: ID does not exist" containerID="0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.062747 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3"} err="failed to get container status \"0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3\": rpc error: code = NotFound desc = could not find container \"0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3\": container with ID starting with 0981f1d13a5d37fc7977d83d5650df59e7ed50f80cf017bce7e2534ea40e0de3 not found: ID does not exist" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.062779 4688 scope.go:117] "RemoveContainer" containerID="6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a" Sep 30 17:34:39 crc kubenswrapper[4688]: E0930 17:34:39.074040 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a\": container with ID starting with 6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a not found: ID does not exist" containerID="6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.074096 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a"} err="failed to get container status \"6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a\": rpc error: code = NotFound desc = could not find container \"6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a\": container with ID starting with 6c04db995fb2d45e422066f5236123d4645d175c8f83ec07a1dbe0d8d7ba7a2a not found: ID does not exist" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.148408 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-899k5\" (UniqueName: \"kubernetes.io/projected/d4b5e55d-c196-48cd-9ea7-c357ef7a693a-kube-api-access-899k5\") pod \"d4b5e55d-c196-48cd-9ea7-c357ef7a693a\" (UID: \"d4b5e55d-c196-48cd-9ea7-c357ef7a693a\") " Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.164283 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b5e55d-c196-48cd-9ea7-c357ef7a693a-kube-api-access-899k5" (OuterVolumeSpecName: "kube-api-access-899k5") pod "d4b5e55d-c196-48cd-9ea7-c357ef7a693a" (UID: "d4b5e55d-c196-48cd-9ea7-c357ef7a693a"). InnerVolumeSpecName "kube-api-access-899k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.231182 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-d4wgr" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.236350 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wp4x8" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.254950 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-899k5\" (UniqueName: \"kubernetes.io/projected/d4b5e55d-c196-48cd-9ea7-c357ef7a693a-kube-api-access-899k5\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.356888 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b87vv\" (UniqueName: \"kubernetes.io/projected/aa912d04-6534-42ec-9492-db47f7ac0285-kube-api-access-b87vv\") pod \"aa912d04-6534-42ec-9492-db47f7ac0285\" (UID: \"aa912d04-6534-42ec-9492-db47f7ac0285\") " Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.356970 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st2jj\" (UniqueName: \"kubernetes.io/projected/644e9c91-83cf-485d-bddb-700b3d5473e0-kube-api-access-st2jj\") pod \"644e9c91-83cf-485d-bddb-700b3d5473e0\" (UID: \"644e9c91-83cf-485d-bddb-700b3d5473e0\") " Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.361767 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa912d04-6534-42ec-9492-db47f7ac0285-kube-api-access-b87vv" (OuterVolumeSpecName: "kube-api-access-b87vv") pod "aa912d04-6534-42ec-9492-db47f7ac0285" (UID: "aa912d04-6534-42ec-9492-db47f7ac0285"). InnerVolumeSpecName "kube-api-access-b87vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.361931 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644e9c91-83cf-485d-bddb-700b3d5473e0-kube-api-access-st2jj" (OuterVolumeSpecName: "kube-api-access-st2jj") pod "644e9c91-83cf-485d-bddb-700b3d5473e0" (UID: "644e9c91-83cf-485d-bddb-700b3d5473e0"). InnerVolumeSpecName "kube-api-access-st2jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.410841 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-d4wgr" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.410850 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d4wgr" event={"ID":"644e9c91-83cf-485d-bddb-700b3d5473e0","Type":"ContainerDied","Data":"1cb692583dea82d976afac365abf4ffaa428defe69ce45334a842566e2efdef4"} Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.410994 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cb692583dea82d976afac365abf4ffaa428defe69ce45334a842566e2efdef4" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.413716 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gx862" event={"ID":"d4b5e55d-c196-48cd-9ea7-c357ef7a693a","Type":"ContainerDied","Data":"fe082b47d0653f9f2575c32f07495e897834475fb29194bf18e5c742a3668b73"} Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.413739 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe082b47d0653f9f2575c32f07495e897834475fb29194bf18e5c742a3668b73" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.413804 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gx862" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.416762 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wp4x8" event={"ID":"aa912d04-6534-42ec-9492-db47f7ac0285","Type":"ContainerDied","Data":"3b016f34cb9e2ec2be895e9fab979947fac9119a6ba2108f2f755b2a51908864"} Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.416837 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b016f34cb9e2ec2be895e9fab979947fac9119a6ba2108f2f755b2a51908864" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.416888 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wp4x8" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.443108 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" path="/var/lib/kubelet/pods/7a40d4d2-4534-4959-b23a-07bd542d56b7/volumes" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.459171 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b87vv\" (UniqueName: \"kubernetes.io/projected/aa912d04-6534-42ec-9492-db47f7ac0285-kube-api-access-b87vv\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:39 crc kubenswrapper[4688]: I0930 17:34:39.459223 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st2jj\" (UniqueName: \"kubernetes.io/projected/644e9c91-83cf-485d-bddb-700b3d5473e0-kube-api-access-st2jj\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.095072 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.173553 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-combined-ca-bundle\") pod \"8f942e26-6864-4a49-b16e-aa21b1df3d80\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.173649 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data-custom\") pod \"8f942e26-6864-4a49-b16e-aa21b1df3d80\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.173671 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f942e26-6864-4a49-b16e-aa21b1df3d80-etc-machine-id\") pod \"8f942e26-6864-4a49-b16e-aa21b1df3d80\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.173694 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f942e26-6864-4a49-b16e-aa21b1df3d80-logs\") pod \"8f942e26-6864-4a49-b16e-aa21b1df3d80\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.173765 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89c79\" (UniqueName: \"kubernetes.io/projected/8f942e26-6864-4a49-b16e-aa21b1df3d80-kube-api-access-89c79\") pod \"8f942e26-6864-4a49-b16e-aa21b1df3d80\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.173789 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-scripts\") pod \"8f942e26-6864-4a49-b16e-aa21b1df3d80\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.173886 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data\") pod \"8f942e26-6864-4a49-b16e-aa21b1df3d80\" (UID: \"8f942e26-6864-4a49-b16e-aa21b1df3d80\") " Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.174177 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f942e26-6864-4a49-b16e-aa21b1df3d80-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8f942e26-6864-4a49-b16e-aa21b1df3d80" (UID: "8f942e26-6864-4a49-b16e-aa21b1df3d80"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.174393 4688 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f942e26-6864-4a49-b16e-aa21b1df3d80-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.174587 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f942e26-6864-4a49-b16e-aa21b1df3d80-logs" (OuterVolumeSpecName: "logs") pod "8f942e26-6864-4a49-b16e-aa21b1df3d80" (UID: "8f942e26-6864-4a49-b16e-aa21b1df3d80"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.181423 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f942e26-6864-4a49-b16e-aa21b1df3d80" (UID: "8f942e26-6864-4a49-b16e-aa21b1df3d80"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.182151 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-scripts" (OuterVolumeSpecName: "scripts") pod "8f942e26-6864-4a49-b16e-aa21b1df3d80" (UID: "8f942e26-6864-4a49-b16e-aa21b1df3d80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.202959 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f942e26-6864-4a49-b16e-aa21b1df3d80-kube-api-access-89c79" (OuterVolumeSpecName: "kube-api-access-89c79") pod "8f942e26-6864-4a49-b16e-aa21b1df3d80" (UID: "8f942e26-6864-4a49-b16e-aa21b1df3d80"). InnerVolumeSpecName "kube-api-access-89c79". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.212779 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f942e26-6864-4a49-b16e-aa21b1df3d80" (UID: "8f942e26-6864-4a49-b16e-aa21b1df3d80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.256695 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data" (OuterVolumeSpecName: "config-data") pod "8f942e26-6864-4a49-b16e-aa21b1df3d80" (UID: "8f942e26-6864-4a49-b16e-aa21b1df3d80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.275949 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.275988 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.276001 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.276010 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f942e26-6864-4a49-b16e-aa21b1df3d80-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.276022 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89c79\" (UniqueName: \"kubernetes.io/projected/8f942e26-6864-4a49-b16e-aa21b1df3d80-kube-api-access-89c79\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.276031 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f942e26-6864-4a49-b16e-aa21b1df3d80-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.427943 4688 generic.go:334] "Generic (PLEG): container finished" podID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerID="765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e" exitCode=137 Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.428001 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f942e26-6864-4a49-b16e-aa21b1df3d80","Type":"ContainerDied","Data":"765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e"} Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.428042 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f942e26-6864-4a49-b16e-aa21b1df3d80","Type":"ContainerDied","Data":"675848dd3f48491ae92fa613fc5d6f26955b0c1d42a926e65e672aa1a5b095a4"} Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.428066 4688 scope.go:117] "RemoveContainer" containerID="765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.428233 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.464741 4688 scope.go:117] "RemoveContainer" containerID="3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.487261 4688 scope.go:117] "RemoveContainer" containerID="765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e" Sep 30 17:34:40 crc kubenswrapper[4688]: E0930 17:34:40.487808 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e\": container with ID starting with 765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e not found: ID does not exist" containerID="765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.487863 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e"} err="failed to get container status \"765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e\": rpc error: code = NotFound desc = could not find container \"765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e\": container with ID starting with 765d6236a294e221b077b1354702a9a003602e76f4099c4500f409207e72a23e not found: ID does not exist" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.487900 4688 scope.go:117] "RemoveContainer" containerID="3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203" Sep 30 17:34:40 crc kubenswrapper[4688]: E0930 17:34:40.488228 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203\": container with ID starting with 3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203 not found: ID does not exist" containerID="3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.488357 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203"} err="failed to get container status \"3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203\": rpc error: code = NotFound desc = could not find container \"3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203\": container with ID starting with 3f4c599e2fb9f8b2b82c25a07db41d40a736bf0f54b983610f1515319993e203 not found: ID does not exist" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.498961 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.518226 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.528334 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:34:40 crc kubenswrapper[4688]: E0930 17:34:40.528990 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b5e55d-c196-48cd-9ea7-c357ef7a693a" containerName="mariadb-database-create" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.529040 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b5e55d-c196-48cd-9ea7-c357ef7a693a" 
containerName="mariadb-database-create" Sep 30 17:34:40 crc kubenswrapper[4688]: E0930 17:34:40.529054 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerName="cinder-api" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.529061 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerName="cinder-api" Sep 30 17:34:40 crc kubenswrapper[4688]: E0930 17:34:40.529080 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa912d04-6534-42ec-9492-db47f7ac0285" containerName="mariadb-database-create" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.529089 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa912d04-6534-42ec-9492-db47f7ac0285" containerName="mariadb-database-create" Sep 30 17:34:40 crc kubenswrapper[4688]: E0930 17:34:40.529104 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerName="cinder-api-log" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.529111 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerName="cinder-api-log" Sep 30 17:34:40 crc kubenswrapper[4688]: E0930 17:34:40.529126 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.529133 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" Sep 30 17:34:40 crc kubenswrapper[4688]: E0930 17:34:40.529159 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon-log" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.529168 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon-log" Sep 30 17:34:40 crc kubenswrapper[4688]: E0930 17:34:40.529179 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644e9c91-83cf-485d-bddb-700b3d5473e0" containerName="mariadb-database-create" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.529187 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="644e9c91-83cf-485d-bddb-700b3d5473e0" containerName="mariadb-database-create" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.530428 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="644e9c91-83cf-485d-bddb-700b3d5473e0" containerName="mariadb-database-create" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.530461 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa912d04-6534-42ec-9492-db47f7ac0285" containerName="mariadb-database-create" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.530483 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.530496 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerName="cinder-api-log" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.530506 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" containerName="cinder-api" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.530519 4688 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d4b5e55d-c196-48cd-9ea7-c357ef7a693a" containerName="mariadb-database-create" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.530533 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a40d4d2-4534-4959-b23a-07bd542d56b7" containerName="horizon-log" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.531959 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.536018 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.536113 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.536323 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.540483 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.582320 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.582607 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7jgf\" (UniqueName: \"kubernetes.io/projected/1527bf09-6ef0-4fda-a749-9f6440f17159-kube-api-access-h7jgf\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.582759 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.582862 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.583091 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-config-data-custom\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.583280 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1527bf09-6ef0-4fda-a749-9f6440f17159-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.583411 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-scripts\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.583533 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1527bf09-6ef0-4fda-a749-9f6440f17159-logs\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.583694 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-config-data\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686172 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686256 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686300 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-config-data-custom\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686334 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1527bf09-6ef0-4fda-a749-9f6440f17159-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686357 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-scripts\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686381 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1527bf09-6ef0-4fda-a749-9f6440f17159-logs\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686406 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-config-data\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686468 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/1527bf09-6ef0-4fda-a749-9f6440f17159-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686498 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.686766 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7jgf\" (UniqueName: \"kubernetes.io/projected/1527bf09-6ef0-4fda-a749-9f6440f17159-kube-api-access-h7jgf\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.687403 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1527bf09-6ef0-4fda-a749-9f6440f17159-logs\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.692146 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.692928 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-config-data-custom\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.697283 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-scripts\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.697655 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.698310 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-config-data\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.701408 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1527bf09-6ef0-4fda-a749-9f6440f17159-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.708307 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7jgf\" (UniqueName: 
\"kubernetes.io/projected/1527bf09-6ef0-4fda-a749-9f6440f17159-kube-api-access-h7jgf\") pod \"cinder-api-0\" (UID: \"1527bf09-6ef0-4fda-a749-9f6440f17159\") " pod="openstack/cinder-api-0" Sep 30 17:34:40 crc kubenswrapper[4688]: I0930 17:34:40.863249 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.360406 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.471048 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f942e26-6864-4a49-b16e-aa21b1df3d80" path="/var/lib/kubelet/pods/8f942e26-6864-4a49-b16e-aa21b1df3d80/volumes" Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.475336 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerStarted","Data":"58ca7c9778c61a2b385cd52b87375d924df3b88af67d838ba6c7f265aaff2ee2"} Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.476587 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.476731 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="proxy-httpd" containerID="cri-o://58ca7c9778c61a2b385cd52b87375d924df3b88af67d838ba6c7f265aaff2ee2" gracePeriod=30 Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.476917 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="sg-core" containerID="cri-o://4e9793b3f34067ca1d1930517d28508bfe48370d33a6906b9819b5d81b3f6d14" gracePeriod=30 Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.477027 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="ceilometer-notification-agent" containerID="cri-o://d85899ad6aac8091a73c55948ddb1ef755e2d1c3a2c140d1c5fe98f2e97eece2" gracePeriod=30 Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.492686 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="ceilometer-central-agent" containerID="cri-o://e4c9325653ef7264455f2a1e7b6389ab045d5d97e98c45329514e2f2a181108b" gracePeriod=30 Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.502777 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.503188 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerName="glance-log" containerID="cri-o://62934cb518d898fe7bb2900891a3a2743470ba347554470984cbf411ed3e0443" gracePeriod=30 Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 17:34:41.503614 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerName="glance-httpd" containerID="cri-o://6d21aaaa28eb918312e391e7a5819cd1bc060be08e1f0d91b17b81b40ac19d2b" gracePeriod=30 Sep 30 17:34:41 crc kubenswrapper[4688]: I0930 
17:34:41.518834 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1527bf09-6ef0-4fda-a749-9f6440f17159","Type":"ContainerStarted","Data":"73e95992646d1d55b4209a4a2f289109e0253cb545b46dddaffad5ff9cd33c0e"} Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.400879 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.454133 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.762970589 podStartE2EDuration="11.45410408s" podCreationTimestamp="2025-09-30 17:34:31 +0000 UTC" firstStartedPulling="2025-09-30 17:34:33.612396368 +0000 UTC m=+1130.907833926" lastFinishedPulling="2025-09-30 17:34:40.303529859 +0000 UTC m=+1137.598967417" observedRunningTime="2025-09-30 17:34:41.540431266 +0000 UTC m=+1138.835868834" watchObservedRunningTime="2025-09-30 17:34:42.45410408 +0000 UTC m=+1139.749541638" Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.512656 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.513026 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerName="glance-log" containerID="cri-o://b0a6f887c1666cdc016e916168b293a31b068992aebfb6998cb2debf8123edf8" gracePeriod=30 Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.513737 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerName="glance-httpd" containerID="cri-o://f550d9ff6855f9a686dded1448da6fd152d2f73ad76fd29f8072555db5c56c68" gracePeriod=30 Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.588927 4688 generic.go:334] "Generic (PLEG): container finished" podID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerID="62934cb518d898fe7bb2900891a3a2743470ba347554470984cbf411ed3e0443" exitCode=143 Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.589014 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"45dadd69-d1a4-41fe-8e93-81a835ce2501","Type":"ContainerDied","Data":"62934cb518d898fe7bb2900891a3a2743470ba347554470984cbf411ed3e0443"} Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.594859 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1527bf09-6ef0-4fda-a749-9f6440f17159","Type":"ContainerStarted","Data":"cc5c718f8375c4f956d05eb4f2bf92691b28384bbb58243074a71f14e24ee093"} Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.606811 4688 generic.go:334] "Generic (PLEG): container finished" podID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerID="58ca7c9778c61a2b385cd52b87375d924df3b88af67d838ba6c7f265aaff2ee2" exitCode=0 Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.606853 4688 generic.go:334] "Generic (PLEG): container finished" podID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerID="4e9793b3f34067ca1d1930517d28508bfe48370d33a6906b9819b5d81b3f6d14" exitCode=2 Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.606866 4688 generic.go:334] "Generic (PLEG): container finished" podID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerID="d85899ad6aac8091a73c55948ddb1ef755e2d1c3a2c140d1c5fe98f2e97eece2" 
exitCode=0 Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.606878 4688 generic.go:334] "Generic (PLEG): container finished" podID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerID="e4c9325653ef7264455f2a1e7b6389ab045d5d97e98c45329514e2f2a181108b" exitCode=0 Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.606901 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerDied","Data":"58ca7c9778c61a2b385cd52b87375d924df3b88af67d838ba6c7f265aaff2ee2"} Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.606974 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerDied","Data":"4e9793b3f34067ca1d1930517d28508bfe48370d33a6906b9819b5d81b3f6d14"} Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.606987 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerDied","Data":"d85899ad6aac8091a73c55948ddb1ef755e2d1c3a2c140d1c5fe98f2e97eece2"} Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.606996 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerDied","Data":"e4c9325653ef7264455f2a1e7b6389ab045d5d97e98c45329514e2f2a181108b"} Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.862070 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.945845 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-sg-core-conf-yaml\") pod \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.946336 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-scripts\") pod \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.946415 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-config-data\") pod \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.946444 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-combined-ca-bundle\") pod \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.946577 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-run-httpd\") pod \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.946833 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-log-httpd\") pod \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.946898 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwglm\" (UniqueName: \"kubernetes.io/projected/9683a077-e82d-4eaa-a3c2-254d0e7028e6-kube-api-access-pwglm\") pod \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\" (UID: \"9683a077-e82d-4eaa-a3c2-254d0e7028e6\") " Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.947224 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9683a077-e82d-4eaa-a3c2-254d0e7028e6" (UID: "9683a077-e82d-4eaa-a3c2-254d0e7028e6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.947424 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9683a077-e82d-4eaa-a3c2-254d0e7028e6" (UID: "9683a077-e82d-4eaa-a3c2-254d0e7028e6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.948045 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.948069 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9683a077-e82d-4eaa-a3c2-254d0e7028e6-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.954086 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-scripts" (OuterVolumeSpecName: "scripts") pod "9683a077-e82d-4eaa-a3c2-254d0e7028e6" (UID: "9683a077-e82d-4eaa-a3c2-254d0e7028e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.954176 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9683a077-e82d-4eaa-a3c2-254d0e7028e6-kube-api-access-pwglm" (OuterVolumeSpecName: "kube-api-access-pwglm") pod "9683a077-e82d-4eaa-a3c2-254d0e7028e6" (UID: "9683a077-e82d-4eaa-a3c2-254d0e7028e6"). InnerVolumeSpecName "kube-api-access-pwglm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:42 crc kubenswrapper[4688]: I0930 17:34:42.996939 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9683a077-e82d-4eaa-a3c2-254d0e7028e6" (UID: "9683a077-e82d-4eaa-a3c2-254d0e7028e6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.050480 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwglm\" (UniqueName: \"kubernetes.io/projected/9683a077-e82d-4eaa-a3c2-254d0e7028e6-kube-api-access-pwglm\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.050514 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.050523 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.052937 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9683a077-e82d-4eaa-a3c2-254d0e7028e6" (UID: "9683a077-e82d-4eaa-a3c2-254d0e7028e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.087815 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-config-data" (OuterVolumeSpecName: "config-data") pod "9683a077-e82d-4eaa-a3c2-254d0e7028e6" (UID: "9683a077-e82d-4eaa-a3c2-254d0e7028e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.152006 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.152045 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9683a077-e82d-4eaa-a3c2-254d0e7028e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.618739 4688 generic.go:334] "Generic (PLEG): container finished" podID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerID="b0a6f887c1666cdc016e916168b293a31b068992aebfb6998cb2debf8123edf8" exitCode=143 Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.618835 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d21528b5-a44f-47ec-9a81-6e408908f80d","Type":"ContainerDied","Data":"b0a6f887c1666cdc016e916168b293a31b068992aebfb6998cb2debf8123edf8"} Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.623998 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9683a077-e82d-4eaa-a3c2-254d0e7028e6","Type":"ContainerDied","Data":"aabb9a16d19af376c687236e3727d17402637ae03ea8c71282f3625298dd061b"} Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.624113 4688 scope.go:117] "RemoveContainer" containerID="58ca7c9778c61a2b385cd52b87375d924df3b88af67d838ba6c7f265aaff2ee2" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.624793 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.626470 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1527bf09-6ef0-4fda-a749-9f6440f17159","Type":"ContainerStarted","Data":"c1320b7ee3f171a5bd9067494c268021b2a2a73ccb97101843cf99085c275617"} Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.626879 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.653002 4688 scope.go:117] "RemoveContainer" containerID="4e9793b3f34067ca1d1930517d28508bfe48370d33a6906b9819b5d81b3f6d14" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.662383 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.662356288 podStartE2EDuration="3.662356288s" podCreationTimestamp="2025-09-30 17:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:34:43.657648416 +0000 UTC m=+1140.953085984" watchObservedRunningTime="2025-09-30 17:34:43.662356288 +0000 UTC m=+1140.957793846" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.686609 4688 scope.go:117] "RemoveContainer" containerID="d85899ad6aac8091a73c55948ddb1ef755e2d1c3a2c140d1c5fe98f2e97eece2" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.690216 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.710730 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.723879 4688 scope.go:117] "RemoveContainer" containerID="e4c9325653ef7264455f2a1e7b6389ab045d5d97e98c45329514e2f2a181108b" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.731903 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:43 crc kubenswrapper[4688]: E0930 17:34:43.732369 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="ceilometer-notification-agent" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.732390 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="ceilometer-notification-agent" Sep 30 17:34:43 crc kubenswrapper[4688]: E0930 17:34:43.732405 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="sg-core" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.732411 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="sg-core" Sep 30 17:34:43 crc kubenswrapper[4688]: E0930 17:34:43.732444 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="ceilometer-central-agent" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.732451 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="ceilometer-central-agent" Sep 30 17:34:43 crc kubenswrapper[4688]: E0930 17:34:43.732460 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="proxy-httpd" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.732467 4688 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="proxy-httpd" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.732752 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="sg-core" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.732767 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="ceilometer-central-agent" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.732777 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="ceilometer-notification-agent" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.732786 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" containerName="proxy-httpd" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.734517 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.741221 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.744562 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.749953 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.765561 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-config-data\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.765742 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-scripts\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.765790 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-run-httpd\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.765874 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.765967 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.766035 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnkf\" (UniqueName: \"kubernetes.io/projected/70d6b938-a049-413b-92d7-35867d7ae291-kube-api-access-kgnkf\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.766052 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-log-httpd\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.868025 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-config-data\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.868151 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-scripts\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.868194 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-run-httpd\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.868253 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.868336 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.868370 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgnkf\" (UniqueName: \"kubernetes.io/projected/70d6b938-a049-413b-92d7-35867d7ae291-kube-api-access-kgnkf\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.868394 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-log-httpd\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.869232 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-log-httpd\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.871004 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-run-httpd\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.877153 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.877667 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-scripts\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.877669 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.877949 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-config-data\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:43 crc kubenswrapper[4688]: I0930 17:34:43.892044 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgnkf\" (UniqueName: \"kubernetes.io/projected/70d6b938-a049-413b-92d7-35867d7ae291-kube-api-access-kgnkf\") pod \"ceilometer-0\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " pod="openstack/ceilometer-0" Sep 30 17:34:44 crc kubenswrapper[4688]: I0930 17:34:44.073396 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:44 crc kubenswrapper[4688]: I0930 17:34:44.572210 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:44 crc kubenswrapper[4688]: I0930 17:34:44.640594 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerStarted","Data":"4cfc640c6947aa8992f42e7418e669b0d14a7de36b78a9f02687116feedfcee5"} Sep 30 17:34:45 crc kubenswrapper[4688]: I0930 17:34:45.441926 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9683a077-e82d-4eaa-a3c2-254d0e7028e6" path="/var/lib/kubelet/pods/9683a077-e82d-4eaa-a3c2-254d0e7028e6/volumes" Sep 30 17:34:45 crc kubenswrapper[4688]: I0930 17:34:45.655975 4688 generic.go:334] "Generic (PLEG): container finished" podID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerID="6d21aaaa28eb918312e391e7a5819cd1bc060be08e1f0d91b17b81b40ac19d2b" exitCode=0 Sep 30 17:34:45 crc kubenswrapper[4688]: I0930 17:34:45.657614 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"45dadd69-d1a4-41fe-8e93-81a835ce2501","Type":"ContainerDied","Data":"6d21aaaa28eb918312e391e7a5819cd1bc060be08e1f0d91b17b81b40ac19d2b"} Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.377464 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.427661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-httpd-run\") pod \"45dadd69-d1a4-41fe-8e93-81a835ce2501\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.427717 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-config-data\") pod \"45dadd69-d1a4-41fe-8e93-81a835ce2501\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.427746 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-scripts\") pod \"45dadd69-d1a4-41fe-8e93-81a835ce2501\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.427779 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4drc2\" (UniqueName: \"kubernetes.io/projected/45dadd69-d1a4-41fe-8e93-81a835ce2501-kube-api-access-4drc2\") pod \"45dadd69-d1a4-41fe-8e93-81a835ce2501\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.427923 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"45dadd69-d1a4-41fe-8e93-81a835ce2501\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.428004 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-logs\") pod \"45dadd69-d1a4-41fe-8e93-81a835ce2501\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") 
" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.428039 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-combined-ca-bundle\") pod \"45dadd69-d1a4-41fe-8e93-81a835ce2501\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.428101 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-internal-tls-certs\") pod \"45dadd69-d1a4-41fe-8e93-81a835ce2501\" (UID: \"45dadd69-d1a4-41fe-8e93-81a835ce2501\") " Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.437196 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-logs" (OuterVolumeSpecName: "logs") pod "45dadd69-d1a4-41fe-8e93-81a835ce2501" (UID: "45dadd69-d1a4-41fe-8e93-81a835ce2501"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.485140 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-scripts" (OuterVolumeSpecName: "scripts") pod "45dadd69-d1a4-41fe-8e93-81a835ce2501" (UID: "45dadd69-d1a4-41fe-8e93-81a835ce2501"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.485337 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "45dadd69-d1a4-41fe-8e93-81a835ce2501" (UID: "45dadd69-d1a4-41fe-8e93-81a835ce2501"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.485419 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "45dadd69-d1a4-41fe-8e93-81a835ce2501" (UID: "45dadd69-d1a4-41fe-8e93-81a835ce2501"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.485777 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45dadd69-d1a4-41fe-8e93-81a835ce2501-kube-api-access-4drc2" (OuterVolumeSpecName: "kube-api-access-4drc2") pod "45dadd69-d1a4-41fe-8e93-81a835ce2501" (UID: "45dadd69-d1a4-41fe-8e93-81a835ce2501"). InnerVolumeSpecName "kube-api-access-4drc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.539579 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4drc2\" (UniqueName: \"kubernetes.io/projected/45dadd69-d1a4-41fe-8e93-81a835ce2501-kube-api-access-4drc2\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.539663 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.539685 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.539705 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45dadd69-d1a4-41fe-8e93-81a835ce2501-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.539718 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.542992 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-config-data" (OuterVolumeSpecName: "config-data") pod "45dadd69-d1a4-41fe-8e93-81a835ce2501" (UID: "45dadd69-d1a4-41fe-8e93-81a835ce2501"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.629915 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45dadd69-d1a4-41fe-8e93-81a835ce2501" (UID: "45dadd69-d1a4-41fe-8e93-81a835ce2501"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.642980 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.643019 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.657844 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "45dadd69-d1a4-41fe-8e93-81a835ce2501" (UID: "45dadd69-d1a4-41fe-8e93-81a835ce2501"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.661794 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.686641 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerStarted","Data":"ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8"} Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.690669 4688 generic.go:334] "Generic (PLEG): container finished" podID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerID="f550d9ff6855f9a686dded1448da6fd152d2f73ad76fd29f8072555db5c56c68" exitCode=0 Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.690732 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d21528b5-a44f-47ec-9a81-6e408908f80d","Type":"ContainerDied","Data":"f550d9ff6855f9a686dded1448da6fd152d2f73ad76fd29f8072555db5c56c68"} Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.699866 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"45dadd69-d1a4-41fe-8e93-81a835ce2501","Type":"ContainerDied","Data":"f8fabf17368c4650adcb98abf0b3400efecd1d6101b8c1370b70e61227f6dbc8"} Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.699936 4688 scope.go:117] "RemoveContainer" containerID="6d21aaaa28eb918312e391e7a5819cd1bc060be08e1f0d91b17b81b40ac19d2b" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.700141 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.743485 4688 scope.go:117] "RemoveContainer" containerID="62934cb518d898fe7bb2900891a3a2743470ba347554470984cbf411ed3e0443" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.745371 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.756042 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45dadd69-d1a4-41fe-8e93-81a835ce2501-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.821960 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.839665 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.847520 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:34:46 crc kubenswrapper[4688]: E0930 17:34:46.848103 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerName="glance-log" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.848120 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerName="glance-log" Sep 30 17:34:46 crc kubenswrapper[4688]: E0930 17:34:46.848131 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerName="glance-httpd" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.848137 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerName="glance-httpd" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.848331 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerName="glance-httpd" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.848348 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dadd69-d1a4-41fe-8e93-81a835ce2501" containerName="glance-log" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.856582 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.861717 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.861995 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.883282 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.959859 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.959992 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58w7\" (UniqueName: \"kubernetes.io/projected/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-kube-api-access-v58w7\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.960023 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.960052 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.960087 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.960117 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.960117 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.960166 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:46 crc kubenswrapper[4688]: I0930 17:34:46.960194 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.028595 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.062068 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v58w7\" (UniqueName: \"kubernetes.io/projected/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-kube-api-access-v58w7\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.062512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.063204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.063269 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.063321 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.063408 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.063456 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.063527 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.064120 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.064428 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.064748 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.077682 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.078861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.080007 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.083704 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0"
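The mount side mirrors the teardown in three phases: VerifyControllerAttachedVolume for every volume, MountVolume.MountDevice once per device-backed volume (the local PV lands at /mnt/openstack/pv01), then MountVolume.SetUp per volume into the pod's directory. A minimal sketch of the phase ordering, with illustrative names only:

```go
package main

import "fmt"

// Illustrative model of the three mount phases in the entries above; the real
// kubelet drives these asynchronously through its operation executor.
func mountForPod(podUID string, volumes []string, devicePath string) {
	for _, v := range volumes {
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v)
	}
	// MountDevice runs once per device-backed volume (here, the local PV).
	fmt.Printf("MountVolume.MountDevice succeeded, device mount path %q\n", devicePath)
	for _, v := range volumes {
		// SetUp bind-mounts (or, for secret/configmap, writes) into the pod dir.
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v, podUID)
	}
}

func main() {
	mountForPod("59dd37af-4be8-4c5c-900f-efbd6c09fcf7",
		[]string{"logs", "httpd-run", "config-data", "internal-tls-certs"},
		"/mnt/openstack/pv01")
}
```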
\"kube-api-access-v58w7\" (UniqueName: \"kubernetes.io/projected/59dd37af-4be8-4c5c-900f-efbd6c09fcf7-kube-api-access-v58w7\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.152409 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"59dd37af-4be8-4c5c-900f-efbd6c09fcf7\") " pod="openstack/glance-default-internal-api-0" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.166423 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-combined-ca-bundle\") pod \"d21528b5-a44f-47ec-9a81-6e408908f80d\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.166507 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-config-data\") pod \"d21528b5-a44f-47ec-9a81-6e408908f80d\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.166633 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-public-tls-certs\") pod \"d21528b5-a44f-47ec-9a81-6e408908f80d\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.166677 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-scripts\") pod \"d21528b5-a44f-47ec-9a81-6e408908f80d\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.166746 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4dqf\" (UniqueName: \"kubernetes.io/projected/d21528b5-a44f-47ec-9a81-6e408908f80d-kube-api-access-t4dqf\") pod \"d21528b5-a44f-47ec-9a81-6e408908f80d\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.166828 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"d21528b5-a44f-47ec-9a81-6e408908f80d\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.166925 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-httpd-run\") pod \"d21528b5-a44f-47ec-9a81-6e408908f80d\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.166960 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-logs\") pod \"d21528b5-a44f-47ec-9a81-6e408908f80d\" (UID: \"d21528b5-a44f-47ec-9a81-6e408908f80d\") " Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.169099 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d21528b5-a44f-47ec-9a81-6e408908f80d" (UID: "d21528b5-a44f-47ec-9a81-6e408908f80d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.169353 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-logs" (OuterVolumeSpecName: "logs") pod "d21528b5-a44f-47ec-9a81-6e408908f80d" (UID: "d21528b5-a44f-47ec-9a81-6e408908f80d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.174734 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21528b5-a44f-47ec-9a81-6e408908f80d-kube-api-access-t4dqf" (OuterVolumeSpecName: "kube-api-access-t4dqf") pod "d21528b5-a44f-47ec-9a81-6e408908f80d" (UID: "d21528b5-a44f-47ec-9a81-6e408908f80d"). InnerVolumeSpecName "kube-api-access-t4dqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.180916 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "d21528b5-a44f-47ec-9a81-6e408908f80d" (UID: "d21528b5-a44f-47ec-9a81-6e408908f80d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.183277 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-scripts" (OuterVolumeSpecName: "scripts") pod "d21528b5-a44f-47ec-9a81-6e408908f80d" (UID: "d21528b5-a44f-47ec-9a81-6e408908f80d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.248889 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d21528b5-a44f-47ec-9a81-6e408908f80d" (UID: "d21528b5-a44f-47ec-9a81-6e408908f80d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.249406 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d21528b5-a44f-47ec-9a81-6e408908f80d" (UID: "d21528b5-a44f-47ec-9a81-6e408908f80d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.271383 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.271424 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.271434 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d21528b5-a44f-47ec-9a81-6e408908f80d-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.271442 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.271452 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.271461 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.271470 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4dqf\" (UniqueName: \"kubernetes.io/projected/d21528b5-a44f-47ec-9a81-6e408908f80d-kube-api-access-t4dqf\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.285703 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-config-data" (OuterVolumeSpecName: "config-data") pod "d21528b5-a44f-47ec-9a81-6e408908f80d" (UID: "d21528b5-a44f-47ec-9a81-6e408908f80d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.309285 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.342973 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.376454 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.376503 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d21528b5-a44f-47ec-9a81-6e408908f80d-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.448459 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45dadd69-d1a4-41fe-8e93-81a835ce2501" path="/var/lib/kubelet/pods/45dadd69-d1a4-41fe-8e93-81a835ce2501/volumes" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.731632 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerStarted","Data":"2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d"} Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.735710 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d21528b5-a44f-47ec-9a81-6e408908f80d","Type":"ContainerDied","Data":"8d9e34c96578036e0b3c373db441706aab1bf56010be37d06d33bc2ecde9b95b"} Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.735757 4688 scope.go:117] "RemoveContainer" containerID="f550d9ff6855f9a686dded1448da6fd152d2f73ad76fd29f8072555db5c56c68" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.735925 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.845065 4688 scope.go:117] "RemoveContainer" containerID="b0a6f887c1666cdc016e916168b293a31b068992aebfb6998cb2debf8123edf8" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.882748 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.901677 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.920241 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:34:47 crc kubenswrapper[4688]: E0930 17:34:47.920710 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerName="glance-httpd" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.920731 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerName="glance-httpd" Sep 30 17:34:47 crc kubenswrapper[4688]: E0930 17:34:47.920743 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerName="glance-log" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.920751 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerName="glance-log" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.920957 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerName="glance-log" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.920970 4688 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d21528b5-a44f-47ec-9a81-6e408908f80d" containerName="glance-httpd" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.922121 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.926084 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.926332 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Sep 30 17:34:47 crc kubenswrapper[4688]: I0930 17:34:47.946849 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.064561 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.116043 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.116155 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-logs\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.116213 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.116246 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.116296 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.116343 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.116381 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5fxh9\" (UniqueName: \"kubernetes.io/projected/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-kube-api-access-5fxh9\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.116407 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.220159 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-logs\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.220263 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.220307 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.220365 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.220419 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.220471 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fxh9\" (UniqueName: \"kubernetes.io/projected/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-kube-api-access-5fxh9\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.220506 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.220596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.221047 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.221423 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-logs\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.221947 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.272537 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.277489 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.277996 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.280779 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fxh9\" (UniqueName: \"kubernetes.io/projected/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-kube-api-access-5fxh9\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.304484 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.320663 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: 
\"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4\") " pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.547374 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.763832 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59dd37af-4be8-4c5c-900f-efbd6c09fcf7","Type":"ContainerStarted","Data":"fd7c61aa6732bf87b02eead5b60c8e0d86c0a1ae3676c961ba5c6aa5549df14d"} Sep 30 17:34:48 crc kubenswrapper[4688]: I0930 17:34:48.794465 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerStarted","Data":"68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192"} Sep 30 17:34:49 crc kubenswrapper[4688]: I0930 17:34:49.197977 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 17:34:49 crc kubenswrapper[4688]: W0930 17:34:49.202256 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7cc8d10_5ef9_4db6_bccc_c739bbd9faa4.slice/crio-a84c72240fe44e179918fe196338906eadb3853b6a5bf0bf878a41b01f4653b8 WatchSource:0}: Error finding container a84c72240fe44e179918fe196338906eadb3853b6a5bf0bf878a41b01f4653b8: Status 404 returned error can't find the container with id a84c72240fe44e179918fe196338906eadb3853b6a5bf0bf878a41b01f4653b8 Sep 30 17:34:49 crc kubenswrapper[4688]: I0930 17:34:49.456254 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d21528b5-a44f-47ec-9a81-6e408908f80d" path="/var/lib/kubelet/pods/d21528b5-a44f-47ec-9a81-6e408908f80d/volumes" Sep 30 17:34:49 crc kubenswrapper[4688]: I0930 17:34:49.810314 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59dd37af-4be8-4c5c-900f-efbd6c09fcf7","Type":"ContainerStarted","Data":"035a876f096964cb270162a738380550aadc64ee3919eae102162f77fa76fe19"} Sep 30 17:34:49 crc kubenswrapper[4688]: I0930 17:34:49.810837 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"59dd37af-4be8-4c5c-900f-efbd6c09fcf7","Type":"ContainerStarted","Data":"2f072d7e82c1f6dadf30f159e0580928db4dd4ae89f2a289b2f132c5c8471475"} Sep 30 17:34:49 crc kubenswrapper[4688]: I0930 17:34:49.814538 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4","Type":"ContainerStarted","Data":"a84c72240fe44e179918fe196338906eadb3853b6a5bf0bf878a41b01f4653b8"} Sep 30 17:34:49 crc kubenswrapper[4688]: I0930 17:34:49.840924 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.840895811 podStartE2EDuration="3.840895811s" podCreationTimestamp="2025-09-30 17:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:34:49.830286996 +0000 UTC m=+1147.125724554" watchObservedRunningTime="2025-09-30 17:34:49.840895811 +0000 UTC m=+1147.136333369" Sep 30 17:34:50 crc kubenswrapper[4688]: I0930 17:34:50.825175 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4","Type":"ContainerStarted","Data":"f67d85984b6a0e25cb420e388b789203a5e503244515361951e289496a9e1f16"} Sep 30 17:34:50 crc kubenswrapper[4688]: I0930 17:34:50.825905 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4","Type":"ContainerStarted","Data":"c76dd5f9849ef76c5e19723b0312b6920399bb9c3e92694d9fda6d387d9fd2b2"} Sep 30 17:34:50 crc kubenswrapper[4688]: I0930 17:34:50.830116 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerStarted","Data":"a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b"} Sep 30 17:34:50 crc kubenswrapper[4688]: I0930 17:34:50.855200 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.855173705 podStartE2EDuration="3.855173705s" podCreationTimestamp="2025-09-30 17:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:34:50.84648487 +0000 UTC m=+1148.141922448" watchObservedRunningTime="2025-09-30 17:34:50.855173705 +0000 UTC m=+1148.150611263" Sep 30 17:34:50 crc kubenswrapper[4688]: I0930 17:34:50.878789 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.732692797 podStartE2EDuration="7.878766365s" podCreationTimestamp="2025-09-30 17:34:43 +0000 UTC" firstStartedPulling="2025-09-30 17:34:44.571618938 +0000 UTC m=+1141.867056496" lastFinishedPulling="2025-09-30 17:34:49.717692506 +0000 UTC m=+1147.013130064" observedRunningTime="2025-09-30 17:34:50.87164356 +0000 UTC m=+1148.167081118" watchObservedRunningTime="2025-09-30 17:34:50.878766365 +0000 UTC m=+1148.174203913" Sep 30 17:34:51 crc kubenswrapper[4688]: I0930 17:34:51.840230 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 17:34:52 crc kubenswrapper[4688]: I0930 17:34:52.545752 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:34:52 crc kubenswrapper[4688]: I0930 17:34:52.546158 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:34:52 crc kubenswrapper[4688]: I0930 17:34:52.547849 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:34:52 crc kubenswrapper[4688]: I0930 17:34:52.548516 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60da3ee0a4d40156f3d5a2c83b4b356b77441efafed42128600f215199dc7b8d"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:34:52 crc kubenswrapper[4688]: I0930 
17:34:52.548669 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://60da3ee0a4d40156f3d5a2c83b4b356b77441efafed42128600f215199dc7b8d" gracePeriod=600 Sep 30 17:34:52 crc kubenswrapper[4688]: I0930 17:34:52.853043 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="60da3ee0a4d40156f3d5a2c83b4b356b77441efafed42128600f215199dc7b8d" exitCode=0 Sep 30 17:34:52 crc kubenswrapper[4688]: I0930 17:34:52.853128 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"60da3ee0a4d40156f3d5a2c83b4b356b77441efafed42128600f215199dc7b8d"} Sep 30 17:34:52 crc kubenswrapper[4688]: I0930 17:34:52.854757 4688 scope.go:117] "RemoveContainer" containerID="26d311e67dd58a633fdcdd12ef828cb6c66054ad9ffec131080d59786bc5b556" Sep 30 17:34:52 crc kubenswrapper[4688]: I0930 17:34:52.995944 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Sep 30 17:34:53 crc kubenswrapper[4688]: I0930 17:34:53.867476 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"0c732061131626f7b85d7072fd61b6333264160dfe91ce2f932946d1355bafbc"} Sep 30 17:34:54 crc kubenswrapper[4688]: I0930 17:34:54.311930 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:34:54 crc kubenswrapper[4688]: E0930 17:34:54.312656 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:34:54 crc kubenswrapper[4688]: E0930 17:34:54.312684 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found Sep 30 17:34:54 crc kubenswrapper[4688]: E0930 17:34:54.312742 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:35:26.312720065 +0000 UTC m=+1183.608157623 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.040688 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8d22-account-create-xdvrr"] Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.045970 4688 util.go:30] "No sandbox for pod can be found. 
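The E-level projected.go/nestedpendingoperations.go entries above show a mount that cannot succeed yet: the etc-swift projected volume references configmap swift-ring-files, which does not exist, so kubelet parks the operation in exponential backoff ("durationBeforeRetry 32s") instead of retrying immediately, and will keep retrying until the configmap appears. A sketch of the doubling backoff implied by that 32s value; the starting value and cap here are assumptions, not read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// Doubling backoff sketch: each failed attempt doubles the wait, up to a cap.
// Assumed initial duration and cap; the log only shows the 32s step.
func nextBackoff(d, max time.Duration) time.Duration {
	if d == 0 {
		return 500 * time.Millisecond
	}
	if d*2 > max {
		return max
	}
	return d * 2
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 8; i++ {
		d = nextBackoff(d, 2*time.Minute)
		fmt.Println(d) // 500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s
	}
}
```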
Need to start a new one" pod="openstack/nova-api-8d22-account-create-xdvrr" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.057990 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.098325 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8d22-account-create-xdvrr"] Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.130047 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5625\" (UniqueName: \"kubernetes.io/projected/fc67f708-6f4c-4465-acae-c723b8c054a9-kube-api-access-c5625\") pod \"nova-api-8d22-account-create-xdvrr\" (UID: \"fc67f708-6f4c-4465-acae-c723b8c054a9\") " pod="openstack/nova-api-8d22-account-create-xdvrr" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.236105 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5625\" (UniqueName: \"kubernetes.io/projected/fc67f708-6f4c-4465-acae-c723b8c054a9-kube-api-access-c5625\") pod \"nova-api-8d22-account-create-xdvrr\" (UID: \"fc67f708-6f4c-4465-acae-c723b8c054a9\") " pod="openstack/nova-api-8d22-account-create-xdvrr" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.244683 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fde3-account-create-r75vz"] Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.246491 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fde3-account-create-r75vz" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.249100 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.281177 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fde3-account-create-r75vz"] Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.289035 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5625\" (UniqueName: \"kubernetes.io/projected/fc67f708-6f4c-4465-acae-c723b8c054a9-kube-api-access-c5625\") pod \"nova-api-8d22-account-create-xdvrr\" (UID: \"fc67f708-6f4c-4465-acae-c723b8c054a9\") " pod="openstack/nova-api-8d22-account-create-xdvrr" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.337968 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zh62\" (UniqueName: \"kubernetes.io/projected/32279c8a-a972-4b12-ab61-206e2606a3dc-kube-api-access-7zh62\") pod \"nova-cell0-fde3-account-create-r75vz\" (UID: \"32279c8a-a972-4b12-ab61-206e2606a3dc\") " pod="openstack/nova-cell0-fde3-account-create-r75vz" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.392186 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8d22-account-create-xdvrr" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.440669 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zh62\" (UniqueName: \"kubernetes.io/projected/32279c8a-a972-4b12-ab61-206e2606a3dc-kube-api-access-7zh62\") pod \"nova-cell0-fde3-account-create-r75vz\" (UID: \"32279c8a-a972-4b12-ab61-206e2606a3dc\") " pod="openstack/nova-cell0-fde3-account-create-r75vz" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.467414 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zh62\" (UniqueName: \"kubernetes.io/projected/32279c8a-a972-4b12-ab61-206e2606a3dc-kube-api-access-7zh62\") pod \"nova-cell0-fde3-account-create-r75vz\" (UID: \"32279c8a-a972-4b12-ab61-206e2606a3dc\") " pod="openstack/nova-cell0-fde3-account-create-r75vz" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.494546 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4ab9-account-create-bphg2"] Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.496054 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4ab9-account-create-bphg2"] Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.496219 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ab9-account-create-bphg2" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.500469 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.544884 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt7bg\" (UniqueName: \"kubernetes.io/projected/1de52239-675f-4b7b-bac5-794f9493a77f-kube-api-access-lt7bg\") pod \"nova-cell1-4ab9-account-create-bphg2\" (UID: \"1de52239-675f-4b7b-bac5-794f9493a77f\") " pod="openstack/nova-cell1-4ab9-account-create-bphg2" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.574099 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fde3-account-create-r75vz" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.646417 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt7bg\" (UniqueName: \"kubernetes.io/projected/1de52239-675f-4b7b-bac5-794f9493a77f-kube-api-access-lt7bg\") pod \"nova-cell1-4ab9-account-create-bphg2\" (UID: \"1de52239-675f-4b7b-bac5-794f9493a77f\") " pod="openstack/nova-cell1-4ab9-account-create-bphg2" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.676932 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt7bg\" (UniqueName: \"kubernetes.io/projected/1de52239-675f-4b7b-bac5-794f9493a77f-kube-api-access-lt7bg\") pod \"nova-cell1-4ab9-account-create-bphg2\" (UID: \"1de52239-675f-4b7b-bac5-794f9493a77f\") " pod="openstack/nova-cell1-4ab9-account-create-bphg2" Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.867408 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4ab9-account-create-bphg2" Sep 30 17:34:55 crc kubenswrapper[4688]: W0930 17:34:55.968136 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32279c8a_a972_4b12_ab61_206e2606a3dc.slice/crio-0591176d44bd481788e05a3af23684ae1644a338af3ba04bee75824288f5a0c7 WatchSource:0}: Error finding container 0591176d44bd481788e05a3af23684ae1644a338af3ba04bee75824288f5a0c7: Status 404 returned error can't find the container with id 0591176d44bd481788e05a3af23684ae1644a338af3ba04bee75824288f5a0c7 Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.969845 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8d22-account-create-xdvrr"] Sep 30 17:34:55 crc kubenswrapper[4688]: I0930 17:34:55.987713 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fde3-account-create-r75vz"] Sep 30 17:34:55 crc kubenswrapper[4688]: W0930 17:34:55.987856 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc67f708_6f4c_4465_acae_c723b8c054a9.slice/crio-aedc4c9b2894e1b152874ef1343b2fad24a347476c02d3458278aab8efb07ff8 WatchSource:0}: Error finding container aedc4c9b2894e1b152874ef1343b2fad24a347476c02d3458278aab8efb07ff8: Status 404 returned error can't find the container with id aedc4c9b2894e1b152874ef1343b2fad24a347476c02d3458278aab8efb07ff8 Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.187702 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4ab9-account-create-bphg2"] Sep 30 17:34:56 crc kubenswrapper[4688]: W0930 17:34:56.213725 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1de52239_675f_4b7b_bac5_794f9493a77f.slice/crio-f453a840c74681e74d23595bd898cb863d84645014e27753f379897396a48b20 WatchSource:0}: Error finding container f453a840c74681e74d23595bd898cb863d84645014e27753f379897396a48b20: Status 404 returned error can't find the container with id f453a840c74681e74d23595bd898cb863d84645014e27753f379897396a48b20 Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.910091 4688 generic.go:334] "Generic (PLEG): container finished" podID="1de52239-675f-4b7b-bac5-794f9493a77f" containerID="e3d56a6086565e02522e9de4c75ff3d121574b8576364d578ead13aae1a30fe1" exitCode=0 Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.910161 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ab9-account-create-bphg2" event={"ID":"1de52239-675f-4b7b-bac5-794f9493a77f","Type":"ContainerDied","Data":"e3d56a6086565e02522e9de4c75ff3d121574b8576364d578ead13aae1a30fe1"} Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.910194 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ab9-account-create-bphg2" event={"ID":"1de52239-675f-4b7b-bac5-794f9493a77f","Type":"ContainerStarted","Data":"f453a840c74681e74d23595bd898cb863d84645014e27753f379897396a48b20"} Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.912805 4688 generic.go:334] "Generic (PLEG): container finished" podID="fc67f708-6f4c-4465-acae-c723b8c054a9" containerID="631140fbefdf517056a9847780b117a956f65248a7a92bad895e0fa2ab2cac03" exitCode=0 Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.912852 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8d22-account-create-xdvrr" 
event={"ID":"fc67f708-6f4c-4465-acae-c723b8c054a9","Type":"ContainerDied","Data":"631140fbefdf517056a9847780b117a956f65248a7a92bad895e0fa2ab2cac03"} Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.912890 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8d22-account-create-xdvrr" event={"ID":"fc67f708-6f4c-4465-acae-c723b8c054a9","Type":"ContainerStarted","Data":"aedc4c9b2894e1b152874ef1343b2fad24a347476c02d3458278aab8efb07ff8"} Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.915553 4688 generic.go:334] "Generic (PLEG): container finished" podID="32279c8a-a972-4b12-ab61-206e2606a3dc" containerID="4817a472d4fd71ca7c0a37dc483c2f83e41e342bb2737b7f52382438929aa0fc" exitCode=0 Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.915612 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fde3-account-create-r75vz" event={"ID":"32279c8a-a972-4b12-ab61-206e2606a3dc","Type":"ContainerDied","Data":"4817a472d4fd71ca7c0a37dc483c2f83e41e342bb2737b7f52382438929aa0fc"} Sep 30 17:34:56 crc kubenswrapper[4688]: I0930 17:34:56.915635 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fde3-account-create-r75vz" event={"ID":"32279c8a-a972-4b12-ab61-206e2606a3dc","Type":"ContainerStarted","Data":"0591176d44bd481788e05a3af23684ae1644a338af3ba04bee75824288f5a0c7"} Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.344514 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.344886 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.386083 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.395511 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.505998 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.506527 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="ceilometer-central-agent" containerID="cri-o://ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8" gracePeriod=30 Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.507247 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="proxy-httpd" containerID="cri-o://a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b" gracePeriod=30 Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.507306 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="sg-core" containerID="cri-o://68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192" gracePeriod=30 Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.507345 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="ceilometer-notification-agent" 
containerID="cri-o://2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d" gracePeriod=30 Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.928074 4688 generic.go:334] "Generic (PLEG): container finished" podID="70d6b938-a049-413b-92d7-35867d7ae291" containerID="a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b" exitCode=0 Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.928115 4688 generic.go:334] "Generic (PLEG): container finished" podID="70d6b938-a049-413b-92d7-35867d7ae291" containerID="68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192" exitCode=2 Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.928179 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerDied","Data":"a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b"} Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.928288 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerDied","Data":"68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192"} Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.928804 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:57 crc kubenswrapper[4688]: I0930 17:34:57.929100 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.295811 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ab9-account-create-bphg2" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.431370 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt7bg\" (UniqueName: \"kubernetes.io/projected/1de52239-675f-4b7b-bac5-794f9493a77f-kube-api-access-lt7bg\") pod \"1de52239-675f-4b7b-bac5-794f9493a77f\" (UID: \"1de52239-675f-4b7b-bac5-794f9493a77f\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.445930 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de52239-675f-4b7b-bac5-794f9493a77f-kube-api-access-lt7bg" (OuterVolumeSpecName: "kube-api-access-lt7bg") pod "1de52239-675f-4b7b-bac5-794f9493a77f" (UID: "1de52239-675f-4b7b-bac5-794f9493a77f"). InnerVolumeSpecName "kube-api-access-lt7bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.457216 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8d22-account-create-xdvrr" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.542513 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5625\" (UniqueName: \"kubernetes.io/projected/fc67f708-6f4c-4465-acae-c723b8c054a9-kube-api-access-c5625\") pod \"fc67f708-6f4c-4465-acae-c723b8c054a9\" (UID: \"fc67f708-6f4c-4465-acae-c723b8c054a9\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.543373 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt7bg\" (UniqueName: \"kubernetes.io/projected/1de52239-675f-4b7b-bac5-794f9493a77f-kube-api-access-lt7bg\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.549888 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.550684 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.550929 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc67f708-6f4c-4465-acae-c723b8c054a9-kube-api-access-c5625" (OuterVolumeSpecName: "kube-api-access-c5625") pod "fc67f708-6f4c-4465-acae-c723b8c054a9" (UID: "fc67f708-6f4c-4465-acae-c723b8c054a9"). InnerVolumeSpecName "kube-api-access-c5625". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.602810 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.610045 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.621137 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fde3-account-create-r75vz" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.645819 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5625\" (UniqueName: \"kubernetes.io/projected/fc67f708-6f4c-4465-acae-c723b8c054a9-kube-api-access-c5625\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.697614 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.747649 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zh62\" (UniqueName: \"kubernetes.io/projected/32279c8a-a972-4b12-ab61-206e2606a3dc-kube-api-access-7zh62\") pod \"32279c8a-a972-4b12-ab61-206e2606a3dc\" (UID: \"32279c8a-a972-4b12-ab61-206e2606a3dc\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.753873 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32279c8a-a972-4b12-ab61-206e2606a3dc-kube-api-access-7zh62" (OuterVolumeSpecName: "kube-api-access-7zh62") pod "32279c8a-a972-4b12-ab61-206e2606a3dc" (UID: "32279c8a-a972-4b12-ab61-206e2606a3dc"). InnerVolumeSpecName "kube-api-access-7zh62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.849749 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-config-data\") pod \"70d6b938-a049-413b-92d7-35867d7ae291\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.849911 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-sg-core-conf-yaml\") pod \"70d6b938-a049-413b-92d7-35867d7ae291\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.849958 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-scripts\") pod \"70d6b938-a049-413b-92d7-35867d7ae291\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.849997 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-combined-ca-bundle\") pod \"70d6b938-a049-413b-92d7-35867d7ae291\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.850072 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-log-httpd\") pod \"70d6b938-a049-413b-92d7-35867d7ae291\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.850169 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-run-httpd\") pod \"70d6b938-a049-413b-92d7-35867d7ae291\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.850217 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgnkf\" (UniqueName: \"kubernetes.io/projected/70d6b938-a049-413b-92d7-35867d7ae291-kube-api-access-kgnkf\") pod \"70d6b938-a049-413b-92d7-35867d7ae291\" (UID: \"70d6b938-a049-413b-92d7-35867d7ae291\") " Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.850908 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zh62\" (UniqueName: \"kubernetes.io/projected/32279c8a-a972-4b12-ab61-206e2606a3dc-kube-api-access-7zh62\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.851096 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "70d6b938-a049-413b-92d7-35867d7ae291" (UID: "70d6b938-a049-413b-92d7-35867d7ae291"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.851136 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "70d6b938-a049-413b-92d7-35867d7ae291" (UID: "70d6b938-a049-413b-92d7-35867d7ae291"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.858950 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-scripts" (OuterVolumeSpecName: "scripts") pod "70d6b938-a049-413b-92d7-35867d7ae291" (UID: "70d6b938-a049-413b-92d7-35867d7ae291"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.859040 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d6b938-a049-413b-92d7-35867d7ae291-kube-api-access-kgnkf" (OuterVolumeSpecName: "kube-api-access-kgnkf") pod "70d6b938-a049-413b-92d7-35867d7ae291" (UID: "70d6b938-a049-413b-92d7-35867d7ae291"). InnerVolumeSpecName "kube-api-access-kgnkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.879489 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "70d6b938-a049-413b-92d7-35867d7ae291" (UID: "70d6b938-a049-413b-92d7-35867d7ae291"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.942469 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8d22-account-create-xdvrr" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.942530 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8d22-account-create-xdvrr" event={"ID":"fc67f708-6f4c-4465-acae-c723b8c054a9","Type":"ContainerDied","Data":"aedc4c9b2894e1b152874ef1343b2fad24a347476c02d3458278aab8efb07ff8"} Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.942608 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aedc4c9b2894e1b152874ef1343b2fad24a347476c02d3458278aab8efb07ff8" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.945540 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70d6b938-a049-413b-92d7-35867d7ae291" (UID: "70d6b938-a049-413b-92d7-35867d7ae291"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.947546 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fde3-account-create-r75vz" event={"ID":"32279c8a-a972-4b12-ab61-206e2606a3dc","Type":"ContainerDied","Data":"0591176d44bd481788e05a3af23684ae1644a338af3ba04bee75824288f5a0c7"} Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.947611 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0591176d44bd481788e05a3af23684ae1644a338af3ba04bee75824288f5a0c7" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.947692 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fde3-account-create-r75vz" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.953552 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-config-data" (OuterVolumeSpecName: "config-data") pod "70d6b938-a049-413b-92d7-35867d7ae291" (UID: "70d6b938-a049-413b-92d7-35867d7ae291"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.955553 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.955652 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.955667 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.955677 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d6b938-a049-413b-92d7-35867d7ae291-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.955686 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgnkf\" (UniqueName: \"kubernetes.io/projected/70d6b938-a049-413b-92d7-35867d7ae291-kube-api-access-kgnkf\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.955695 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.955706 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70d6b938-a049-413b-92d7-35867d7ae291-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.958851 4688 generic.go:334] "Generic (PLEG): container finished" podID="70d6b938-a049-413b-92d7-35867d7ae291" containerID="2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d" exitCode=0 Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.958877 4688 generic.go:334] "Generic (PLEG): container finished" podID="70d6b938-a049-413b-92d7-35867d7ae291" 
containerID="ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8" exitCode=0 Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.958925 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerDied","Data":"2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d"} Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.958955 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerDied","Data":"ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8"} Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.958965 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70d6b938-a049-413b-92d7-35867d7ae291","Type":"ContainerDied","Data":"4cfc640c6947aa8992f42e7418e669b0d14a7de36b78a9f02687116feedfcee5"} Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.958982 4688 scope.go:117] "RemoveContainer" containerID="a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.959130 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.966721 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ab9-account-create-bphg2" event={"ID":"1de52239-675f-4b7b-bac5-794f9493a77f","Type":"ContainerDied","Data":"f453a840c74681e74d23595bd898cb863d84645014e27753f379897396a48b20"} Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.966773 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f453a840c74681e74d23595bd898cb863d84645014e27753f379897396a48b20" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.966856 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4ab9-account-create-bphg2" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.968257 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 17:34:58 crc kubenswrapper[4688]: I0930 17:34:58.968439 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.007832 4688 scope.go:117] "RemoveContainer" containerID="68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.041034 4688 scope.go:117] "RemoveContainer" containerID="2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.043096 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.052605 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.079206 4688 scope.go:117] "RemoveContainer" containerID="ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.116519 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.117154 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="proxy-httpd" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.117179 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="proxy-httpd" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.117204 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de52239-675f-4b7b-bac5-794f9493a77f" containerName="mariadb-account-create" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.117213 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de52239-675f-4b7b-bac5-794f9493a77f" containerName="mariadb-account-create" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.117236 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc67f708-6f4c-4465-acae-c723b8c054a9" containerName="mariadb-account-create" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.117244 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc67f708-6f4c-4465-acae-c723b8c054a9" containerName="mariadb-account-create" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.117274 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="sg-core" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.117282 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="sg-core" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.117296 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32279c8a-a972-4b12-ab61-206e2606a3dc" containerName="mariadb-account-create" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.117305 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="32279c8a-a972-4b12-ab61-206e2606a3dc" containerName="mariadb-account-create" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.117320 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="ceilometer-notification-agent" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.117328 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="ceilometer-notification-agent" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.117341 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="ceilometer-central-agent" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.117364 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="ceilometer-central-agent" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.120794 4688 scope.go:117] "RemoveContainer" containerID="a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.121424 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b\": container with ID starting with a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b not found: ID does not exist" containerID="a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.121470 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b"} err="failed to get container status \"a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b\": rpc error: code = NotFound desc = could not find container \"a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b\": container with ID starting with a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b not found: ID does not exist" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.121500 4688 scope.go:117] "RemoveContainer" containerID="68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.121834 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192\": container with ID starting with 68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192 not found: ID does not exist" containerID="68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.121861 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192"} err="failed to get container status \"68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192\": rpc error: code = NotFound desc = could not find container \"68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192\": container with ID starting with 68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192 not found: ID does not exist" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.121877 4688 scope.go:117] "RemoveContainer" containerID="2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.122080 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d\": container with ID starting with 2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d not found: ID does not exist" containerID="2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.122098 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d"} err="failed to get container status \"2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d\": rpc error: code = NotFound desc = could not find container \"2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d\": container with ID starting with 2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d not found: ID does not exist" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.122117 4688 scope.go:117] "RemoveContainer" containerID="ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8" Sep 30 17:34:59 crc kubenswrapper[4688]: E0930 17:34:59.122341 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8\": container with ID starting with ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8 not found: ID does not exist" containerID="ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.122369 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8"} err="failed to get container status \"ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8\": rpc error: code = NotFound desc = could not find container \"ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8\": container with ID starting with ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8 not found: ID does not exist" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.122387 4688 scope.go:117] "RemoveContainer" containerID="a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.122600 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b"} err="failed to get container status \"a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b\": rpc error: code = NotFound desc = could not find container \"a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b\": container with ID starting with a2ef15901ccbee97f3e588de0f2326c4492c801291f728c8a5546a453f6efe4b not found: ID does not exist" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.122620 4688 scope.go:117] "RemoveContainer" containerID="68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.122804 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192"} err="failed to get container status \"68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192\": rpc error: code = NotFound desc = could not find container \"68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192\": container with ID starting with 
68202493d96983364baeee80dd0f95665204e3a30df2808989a6021dd83e1192 not found: ID does not exist" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.122821 4688 scope.go:117] "RemoveContainer" containerID="2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.122983 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d"} err="failed to get container status \"2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d\": rpc error: code = NotFound desc = could not find container \"2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d\": container with ID starting with 2569d573d3b3390b3144448a6bc617f9da7bbfeaf4c96f42ca749d9bdf34312d not found: ID does not exist" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.123000 4688 scope.go:117] "RemoveContainer" containerID="ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.123198 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8"} err="failed to get container status \"ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8\": rpc error: code = NotFound desc = could not find container \"ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8\": container with ID starting with ae19ea38a42ad9359559064811ae73de5800278cf06636167e65dfe76e1d74d8 not found: ID does not exist" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.124939 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="sg-core" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.124981 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="proxy-httpd" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.125006 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="ceilometer-central-agent" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.125018 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="32279c8a-a972-4b12-ab61-206e2606a3dc" containerName="mariadb-account-create" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.125037 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d6b938-a049-413b-92d7-35867d7ae291" containerName="ceilometer-notification-agent" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.125047 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc67f708-6f4c-4465-acae-c723b8c054a9" containerName="mariadb-account-create" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.125060 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de52239-675f-4b7b-bac5-794f9493a77f" containerName="mariadb-account-create" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.127096 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.127244 4688 util.go:30] "No sandbox for pod can be found. 
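
Every RemoveContainer above draws a NotFound from the runtime, is logged, and is then skipped; several IDs are even removed twice. During cleanup "already gone" equals success, so deletion has to be idempotent. A sketch of that pattern (errNotFound is an illustrative stand-in for the CRI NotFound status, not the real API):

package main

import (
	"errors"
	"fmt"
)

// errNotFound mirrors the runtime's "NotFound" status seen in the
// DeleteContainer entries above. Illustrative stand-in only.
var errNotFound = errors.New("NotFound: ID does not exist")

// removeContainer treats NotFound as success: during pod cleanup the
// goal state is "container absent", so a second delete of the same ID
// must not fail the sync loop.
func removeContainer(id string, remove func(string) error) error {
	if err := remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer returned error for %s: %v (treated as already removed)\n", id, err)
			return nil
		}
		return err // a real failure would be retried or surfaced
	}
	return nil
}

func main() {
	_ = removeContainer("a2ef15901ccb", func(string) error { return errNotFound })
}
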
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.129801 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.131299 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.267097 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.267161 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-config-data\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.267197 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjzgj\" (UniqueName: \"kubernetes.io/projected/773b7a90-7dc5-4abd-844e-3aa21884e390-kube-api-access-pjzgj\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.267442 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-scripts\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.267604 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-log-httpd\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.267751 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-run-httpd\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.267902 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.369532 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-run-httpd\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.369657 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.369726 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.369753 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-config-data\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.369781 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjzgj\" (UniqueName: \"kubernetes.io/projected/773b7a90-7dc5-4abd-844e-3aa21884e390-kube-api-access-pjzgj\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.369809 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-scripts\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.369842 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-log-httpd\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.370070 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-run-httpd\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.370276 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-log-httpd\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.375029 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.375674 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-scripts\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.375716 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.376690 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-config-data\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.389444 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjzgj\" (UniqueName: \"kubernetes.io/projected/773b7a90-7dc5-4abd-844e-3aa21884e390-kube-api-access-pjzgj\") pod \"ceilometer-0\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.444146 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d6b938-a049-413b-92d7-35867d7ae291" path="/var/lib/kubelet/pods/70d6b938-a049-413b-92d7-35867d7ae291/volumes" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.459053 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.972522 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.996129 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:34:59 crc kubenswrapper[4688]: I0930 17:34:59.996160 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.509996 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.557640 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfglj"] Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.559210 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.562816 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.563548 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.563800 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tsn4v" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.565806 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfglj"] Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.594764 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.702871 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.702972 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-config-data\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.703505 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-scripts\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.703873 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x47bz\" (UniqueName: \"kubernetes.io/projected/931d7b6d-de48-4402-ac4e-70fcfa27004a-kube-api-access-x47bz\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.786227 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7d6ae60e-e1d8-4807-8061-062b2fde551d" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.805707 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x47bz\" (UniqueName: \"kubernetes.io/projected/931d7b6d-de48-4402-ac4e-70fcfa27004a-kube-api-access-x47bz\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.805788 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.805826 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-config-data\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.805920 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-scripts\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.812684 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-config-data\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.814333 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.815009 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-scripts\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.837483 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x47bz\" (UniqueName: \"kubernetes.io/projected/931d7b6d-de48-4402-ac4e-70fcfa27004a-kube-api-access-x47bz\") pod \"nova-cell0-conductor-db-sync-rfglj\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:00 crc kubenswrapper[4688]: I0930 17:35:00.888347 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:01 crc kubenswrapper[4688]: I0930 17:35:01.016691 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerStarted","Data":"e387c07027f56ff7a406e525b15d7866bd6396121ba74a7356899208613beedc"} Sep 30 17:35:01 crc kubenswrapper[4688]: I0930 17:35:01.016952 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:35:01 crc kubenswrapper[4688]: I0930 17:35:01.016989 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 17:35:01 crc kubenswrapper[4688]: I0930 17:35:01.441208 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfglj"] Sep 30 17:35:01 crc kubenswrapper[4688]: W0930 17:35:01.451554 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod931d7b6d_de48_4402_ac4e_70fcfa27004a.slice/crio-e18ebce9d4551377976ec16e054986303c212488816b37fa4d52a0d2f93ce033 WatchSource:0}: Error finding container e18ebce9d4551377976ec16e054986303c212488816b37fa4d52a0d2f93ce033: Status 404 returned error can't find the container with id e18ebce9d4551377976ec16e054986303c212488816b37fa4d52a0d2f93ce033 Sep 30 17:35:01 crc kubenswrapper[4688]: I0930 17:35:01.523698 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 30 17:35:01 crc kubenswrapper[4688]: I0930 17:35:01.628908 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 30 17:35:02 crc kubenswrapper[4688]: I0930 17:35:02.027425 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rfglj" event={"ID":"931d7b6d-de48-4402-ac4e-70fcfa27004a","Type":"ContainerStarted","Data":"e18ebce9d4551377976ec16e054986303c212488816b37fa4d52a0d2f93ce033"} Sep 30 17:35:02 crc kubenswrapper[4688]: I0930 17:35:02.031360 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerStarted","Data":"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464"} Sep 30 17:35:02 crc kubenswrapper[4688]: I0930 17:35:02.031503 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerStarted","Data":"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe"} Sep 30 17:35:03 crc kubenswrapper[4688]: I0930 17:35:03.056318 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerStarted","Data":"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d"} Sep 30 17:35:05 crc kubenswrapper[4688]: I0930 17:35:05.084640 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerStarted","Data":"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af"} Sep 30 17:35:05 crc kubenswrapper[4688]: I0930 17:35:05.085483 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 17:35:05 crc kubenswrapper[4688]: I0930 17:35:05.112057 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=1.680693934 podStartE2EDuration="6.112025182s" podCreationTimestamp="2025-09-30 17:34:59 +0000 UTC" firstStartedPulling="2025-09-30 17:34:59.986631906 +0000 UTC m=+1157.282069464" lastFinishedPulling="2025-09-30 17:35:04.417963154 +0000 UTC m=+1161.713400712" observedRunningTime="2025-09-30 17:35:05.106837328 +0000 UTC m=+1162.402274886" watchObservedRunningTime="2025-09-30 17:35:05.112025182 +0000 UTC m=+1162.407462740" Sep 30 17:35:12 crc kubenswrapper[4688]: I0930 17:35:12.069822 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:35:12 crc kubenswrapper[4688]: I0930 17:35:12.071106 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="ceilometer-central-agent" containerID="cri-o://98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe" gracePeriod=30 Sep 30 17:35:12 crc kubenswrapper[4688]: I0930 17:35:12.071181 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="proxy-httpd" containerID="cri-o://6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af" gracePeriod=30 Sep 30 17:35:12 crc kubenswrapper[4688]: I0930 17:35:12.071412 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="sg-core" containerID="cri-o://362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d" gracePeriod=30 Sep 30 17:35:12 crc kubenswrapper[4688]: I0930 17:35:12.071407 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="ceilometer-notification-agent" containerID="cri-o://dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464" gracePeriod=30 Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.091271 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.194697 4688 generic.go:334] "Generic (PLEG): container finished" podID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerID="6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af" exitCode=0 Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.195065 4688 generic.go:334] "Generic (PLEG): container finished" podID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerID="362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d" exitCode=2 Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.195186 4688 generic.go:334] "Generic (PLEG): container finished" podID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerID="dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464" exitCode=0 Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.195301 4688 generic.go:334] "Generic (PLEG): container finished" podID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerID="98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe" exitCode=0 Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.194972 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.194861 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerDied","Data":"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af"} Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.197529 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerDied","Data":"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d"} Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.197753 4688 scope.go:117] "RemoveContainer" containerID="6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.197764 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerDied","Data":"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464"} Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.197926 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerDied","Data":"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe"} Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.197944 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"773b7a90-7dc5-4abd-844e-3aa21884e390","Type":"ContainerDied","Data":"e387c07027f56ff7a406e525b15d7866bd6396121ba74a7356899208613beedc"} Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.202659 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rfglj" event={"ID":"931d7b6d-de48-4402-ac4e-70fcfa27004a","Type":"ContainerStarted","Data":"2bde185ac7b50523c52dddadbab31b31b527ba70803af40e5a65ee7a627e1909"} Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.208269 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-run-httpd\") pod \"773b7a90-7dc5-4abd-844e-3aa21884e390\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.208316 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-combined-ca-bundle\") pod \"773b7a90-7dc5-4abd-844e-3aa21884e390\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.208461 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjzgj\" (UniqueName: \"kubernetes.io/projected/773b7a90-7dc5-4abd-844e-3aa21884e390-kube-api-access-pjzgj\") pod \"773b7a90-7dc5-4abd-844e-3aa21884e390\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.208514 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-config-data\") pod \"773b7a90-7dc5-4abd-844e-3aa21884e390\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.208535 4688 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-scripts\") pod \"773b7a90-7dc5-4abd-844e-3aa21884e390\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.208726 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-sg-core-conf-yaml\") pod \"773b7a90-7dc5-4abd-844e-3aa21884e390\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.208784 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-log-httpd\") pod \"773b7a90-7dc5-4abd-844e-3aa21884e390\" (UID: \"773b7a90-7dc5-4abd-844e-3aa21884e390\") " Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.209009 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "773b7a90-7dc5-4abd-844e-3aa21884e390" (UID: "773b7a90-7dc5-4abd-844e-3aa21884e390"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.209560 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.210215 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "773b7a90-7dc5-4abd-844e-3aa21884e390" (UID: "773b7a90-7dc5-4abd-844e-3aa21884e390"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.216806 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-scripts" (OuterVolumeSpecName: "scripts") pod "773b7a90-7dc5-4abd-844e-3aa21884e390" (UID: "773b7a90-7dc5-4abd-844e-3aa21884e390"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.217179 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773b7a90-7dc5-4abd-844e-3aa21884e390-kube-api-access-pjzgj" (OuterVolumeSpecName: "kube-api-access-pjzgj") pod "773b7a90-7dc5-4abd-844e-3aa21884e390" (UID: "773b7a90-7dc5-4abd-844e-3aa21884e390"). InnerVolumeSpecName "kube-api-access-pjzgj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.228734 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rfglj" podStartSLOduration=2.345411883 podStartE2EDuration="13.228705386s" podCreationTimestamp="2025-09-30 17:35:00 +0000 UTC" firstStartedPulling="2025-09-30 17:35:01.454857033 +0000 UTC m=+1158.750294621" lastFinishedPulling="2025-09-30 17:35:12.338150566 +0000 UTC m=+1169.633588124" observedRunningTime="2025-09-30 17:35:13.219309492 +0000 UTC m=+1170.514747050" watchObservedRunningTime="2025-09-30 17:35:13.228705386 +0000 UTC m=+1170.524142944" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.249473 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "773b7a90-7dc5-4abd-844e-3aa21884e390" (UID: "773b7a90-7dc5-4abd-844e-3aa21884e390"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.299278 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "773b7a90-7dc5-4abd-844e-3aa21884e390" (UID: "773b7a90-7dc5-4abd-844e-3aa21884e390"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.312213 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjzgj\" (UniqueName: \"kubernetes.io/projected/773b7a90-7dc5-4abd-844e-3aa21884e390-kube-api-access-pjzgj\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.312449 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.312556 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.312806 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/773b7a90-7dc5-4abd-844e-3aa21884e390-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.312902 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.336483 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-config-data" (OuterVolumeSpecName: "config-data") pod "773b7a90-7dc5-4abd-844e-3aa21884e390" (UID: "773b7a90-7dc5-4abd-844e-3aa21884e390"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.414655 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773b7a90-7dc5-4abd-844e-3aa21884e390-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.417896 4688 scope.go:117] "RemoveContainer" containerID="362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.455965 4688 scope.go:117] "RemoveContainer" containerID="dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.480694 4688 scope.go:117] "RemoveContainer" containerID="98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.500805 4688 scope.go:117] "RemoveContainer" containerID="6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af" Sep 30 17:35:13 crc kubenswrapper[4688]: E0930 17:35:13.501482 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af\": container with ID starting with 6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af not found: ID does not exist" containerID="6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.501543 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af"} err="failed to get container status \"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af\": rpc error: code = NotFound desc = could not find container \"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af\": container with ID starting with 6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.501595 4688 scope.go:117] "RemoveContainer" containerID="362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d" Sep 30 17:35:13 crc kubenswrapper[4688]: E0930 17:35:13.502218 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d\": container with ID starting with 362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d not found: ID does not exist" containerID="362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.502284 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d"} err="failed to get container status \"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d\": rpc error: code = NotFound desc = could not find container \"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d\": container with ID starting with 362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.502330 4688 scope.go:117] "RemoveContainer" containerID="dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464" Sep 30 17:35:13 crc kubenswrapper[4688]: E0930 
17:35:13.502810 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464\": container with ID starting with dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464 not found: ID does not exist" containerID="dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.502841 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464"} err="failed to get container status \"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464\": rpc error: code = NotFound desc = could not find container \"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464\": container with ID starting with dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464 not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.502864 4688 scope.go:117] "RemoveContainer" containerID="98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe" Sep 30 17:35:13 crc kubenswrapper[4688]: E0930 17:35:13.503205 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe\": container with ID starting with 98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe not found: ID does not exist" containerID="98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.503240 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe"} err="failed to get container status \"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe\": rpc error: code = NotFound desc = could not find container \"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe\": container with ID starting with 98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.503261 4688 scope.go:117] "RemoveContainer" containerID="6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.504221 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af"} err="failed to get container status \"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af\": rpc error: code = NotFound desc = could not find container \"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af\": container with ID starting with 6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.504271 4688 scope.go:117] "RemoveContainer" containerID="362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.504644 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d"} err="failed to get container status \"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d\": rpc error: code = 
NotFound desc = could not find container \"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d\": container with ID starting with 362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.504673 4688 scope.go:117] "RemoveContainer" containerID="dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.504962 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464"} err="failed to get container status \"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464\": rpc error: code = NotFound desc = could not find container \"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464\": container with ID starting with dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464 not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.504992 4688 scope.go:117] "RemoveContainer" containerID="98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.505232 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe"} err="failed to get container status \"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe\": rpc error: code = NotFound desc = could not find container \"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe\": container with ID starting with 98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.505259 4688 scope.go:117] "RemoveContainer" containerID="6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.505585 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af"} err="failed to get container status \"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af\": rpc error: code = NotFound desc = could not find container \"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af\": container with ID starting with 6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.505612 4688 scope.go:117] "RemoveContainer" containerID="362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.506104 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d"} err="failed to get container status \"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d\": rpc error: code = NotFound desc = could not find container \"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d\": container with ID starting with 362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.506136 4688 scope.go:117] "RemoveContainer" containerID="dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 
17:35:13.507095 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464"} err="failed to get container status \"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464\": rpc error: code = NotFound desc = could not find container \"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464\": container with ID starting with dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464 not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.507124 4688 scope.go:117] "RemoveContainer" containerID="98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.507423 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe"} err="failed to get container status \"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe\": rpc error: code = NotFound desc = could not find container \"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe\": container with ID starting with 98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.507451 4688 scope.go:117] "RemoveContainer" containerID="6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.507745 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af"} err="failed to get container status \"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af\": rpc error: code = NotFound desc = could not find container \"6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af\": container with ID starting with 6b56eea7cd88c66ff408f7115ec0ff6ff63292a31ff88fc4120e22f2969875af not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.507774 4688 scope.go:117] "RemoveContainer" containerID="362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.508047 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d"} err="failed to get container status \"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d\": rpc error: code = NotFound desc = could not find container \"362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d\": container with ID starting with 362253d94c6ee95ff0355613daa9459ef72ddaf361e9e1b267ff24de46b95d4d not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.508086 4688 scope.go:117] "RemoveContainer" containerID="dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.508771 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464"} err="failed to get container status \"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464\": rpc error: code = NotFound desc = could not find container \"dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464\": container with ID starting with 
dbdcd8fdc31e022c4ecd1be405b6dc2b3e4ce6534436fcdeffdceae1d197b464 not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.508804 4688 scope.go:117] "RemoveContainer" containerID="98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.514179 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe"} err="failed to get container status \"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe\": rpc error: code = NotFound desc = could not find container \"98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe\": container with ID starting with 98479c969240b68e65f143499e5e68c8d99c7151a63201243c7d276cd20252fe not found: ID does not exist" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.528375 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.542401 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.555394 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:35:13 crc kubenswrapper[4688]: E0930 17:35:13.555982 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="proxy-httpd" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.556007 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="proxy-httpd" Sep 30 17:35:13 crc kubenswrapper[4688]: E0930 17:35:13.556022 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="ceilometer-central-agent" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.556030 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="ceilometer-central-agent" Sep 30 17:35:13 crc kubenswrapper[4688]: E0930 17:35:13.556044 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="ceilometer-notification-agent" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.556054 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="ceilometer-notification-agent" Sep 30 17:35:13 crc kubenswrapper[4688]: E0930 17:35:13.556108 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="sg-core" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.556118 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="sg-core" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.556363 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="ceilometer-central-agent" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.556382 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="sg-core" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.556403 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="proxy-httpd" Sep 30 17:35:13 crc 
kubenswrapper[4688]: I0930 17:35:13.556423 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" containerName="ceilometer-notification-agent" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.558328 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.562052 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.562141 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.564380 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.621116 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-config-data\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.621180 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzkqm\" (UniqueName: \"kubernetes.io/projected/e65fb60c-c034-4960-aecb-651fbee16e3a-kube-api-access-rzkqm\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.621256 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-run-httpd\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.621302 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.621336 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-scripts\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.621417 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.621523 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-log-httpd\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.724025 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.724097 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-log-httpd\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.724193 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-config-data\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.724211 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzkqm\" (UniqueName: \"kubernetes.io/projected/e65fb60c-c034-4960-aecb-651fbee16e3a-kube-api-access-rzkqm\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.724278 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-run-httpd\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.724321 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.724347 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-scripts\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.724632 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-log-httpd\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.725062 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-run-httpd\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.728378 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.729026 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-scripts\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.729518 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-config-data\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.735956 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.757876 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzkqm\" (UniqueName: \"kubernetes.io/projected/e65fb60c-c034-4960-aecb-651fbee16e3a-kube-api-access-rzkqm\") pod \"ceilometer-0\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") " pod="openstack/ceilometer-0" Sep 30 17:35:13 crc kubenswrapper[4688]: I0930 17:35:13.875476 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:35:14 crc kubenswrapper[4688]: W0930 17:35:14.364680 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode65fb60c_c034_4960_aecb_651fbee16e3a.slice/crio-0823f8e1a077691fd99d89314391e13f6558aada1891be4d30b32ed15345e643 WatchSource:0}: Error finding container 0823f8e1a077691fd99d89314391e13f6558aada1891be4d30b32ed15345e643: Status 404 returned error can't find the container with id 0823f8e1a077691fd99d89314391e13f6558aada1891be4d30b32ed15345e643 Sep 30 17:35:14 crc kubenswrapper[4688]: I0930 17:35:14.370867 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:35:15 crc kubenswrapper[4688]: I0930 17:35:15.235806 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerStarted","Data":"0823f8e1a077691fd99d89314391e13f6558aada1891be4d30b32ed15345e643"} Sep 30 17:35:15 crc kubenswrapper[4688]: I0930 17:35:15.440987 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773b7a90-7dc5-4abd-844e-3aa21884e390" path="/var/lib/kubelet/pods/773b7a90-7dc5-4abd-844e-3aa21884e390/volumes" Sep 30 17:35:16 crc kubenswrapper[4688]: I0930 17:35:16.251282 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerStarted","Data":"202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e"} Sep 30 17:35:16 crc kubenswrapper[4688]: I0930 17:35:16.251924 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerStarted","Data":"5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed"} Sep 30 17:35:17 crc kubenswrapper[4688]: I0930 17:35:17.264974 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerStarted","Data":"971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd"} 
Sep 30 17:35:19 crc kubenswrapper[4688]: I0930 17:35:19.287950 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerStarted","Data":"839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946"} Sep 30 17:35:19 crc kubenswrapper[4688]: I0930 17:35:19.288767 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 17:35:19 crc kubenswrapper[4688]: I0930 17:35:19.317020 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.20530254 podStartE2EDuration="6.31699857s" podCreationTimestamp="2025-09-30 17:35:13 +0000 UTC" firstStartedPulling="2025-09-30 17:35:14.368242145 +0000 UTC m=+1171.663679703" lastFinishedPulling="2025-09-30 17:35:18.479938175 +0000 UTC m=+1175.775375733" observedRunningTime="2025-09-30 17:35:19.314543337 +0000 UTC m=+1176.609980895" watchObservedRunningTime="2025-09-30 17:35:19.31699857 +0000 UTC m=+1176.612436128" Sep 30 17:35:25 crc kubenswrapper[4688]: I0930 17:35:25.349530 4688 generic.go:334] "Generic (PLEG): container finished" podID="931d7b6d-de48-4402-ac4e-70fcfa27004a" containerID="2bde185ac7b50523c52dddadbab31b31b527ba70803af40e5a65ee7a627e1909" exitCode=0 Sep 30 17:35:25 crc kubenswrapper[4688]: I0930 17:35:25.350199 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rfglj" event={"ID":"931d7b6d-de48-4402-ac4e-70fcfa27004a","Type":"ContainerDied","Data":"2bde185ac7b50523c52dddadbab31b31b527ba70803af40e5a65ee7a627e1909"} Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.319976 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:35:26 crc kubenswrapper[4688]: E0930 17:35:26.320269 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:35:26 crc kubenswrapper[4688]: E0930 17:35:26.320308 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found Sep 30 17:35:26 crc kubenswrapper[4688]: E0930 17:35:26.320405 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:36:30.320377239 +0000 UTC m=+1247.615814797 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.727319 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.829371 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-scripts\") pod \"931d7b6d-de48-4402-ac4e-70fcfa27004a\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.829726 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x47bz\" (UniqueName: \"kubernetes.io/projected/931d7b6d-de48-4402-ac4e-70fcfa27004a-kube-api-access-x47bz\") pod \"931d7b6d-de48-4402-ac4e-70fcfa27004a\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.829865 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-config-data\") pod \"931d7b6d-de48-4402-ac4e-70fcfa27004a\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.829903 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-combined-ca-bundle\") pod \"931d7b6d-de48-4402-ac4e-70fcfa27004a\" (UID: \"931d7b6d-de48-4402-ac4e-70fcfa27004a\") " Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.838514 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-scripts" (OuterVolumeSpecName: "scripts") pod "931d7b6d-de48-4402-ac4e-70fcfa27004a" (UID: "931d7b6d-de48-4402-ac4e-70fcfa27004a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.841958 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931d7b6d-de48-4402-ac4e-70fcfa27004a-kube-api-access-x47bz" (OuterVolumeSpecName: "kube-api-access-x47bz") pod "931d7b6d-de48-4402-ac4e-70fcfa27004a" (UID: "931d7b6d-de48-4402-ac4e-70fcfa27004a"). InnerVolumeSpecName "kube-api-access-x47bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.863778 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "931d7b6d-de48-4402-ac4e-70fcfa27004a" (UID: "931d7b6d-de48-4402-ac4e-70fcfa27004a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.865062 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-config-data" (OuterVolumeSpecName: "config-data") pod "931d7b6d-de48-4402-ac4e-70fcfa27004a" (UID: "931d7b6d-de48-4402-ac4e-70fcfa27004a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.933074 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x47bz\" (UniqueName: \"kubernetes.io/projected/931d7b6d-de48-4402-ac4e-70fcfa27004a-kube-api-access-x47bz\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.933143 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.933167 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:26 crc kubenswrapper[4688]: I0930 17:35:26.933185 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/931d7b6d-de48-4402-ac4e-70fcfa27004a-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.374394 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rfglj" event={"ID":"931d7b6d-de48-4402-ac4e-70fcfa27004a","Type":"ContainerDied","Data":"e18ebce9d4551377976ec16e054986303c212488816b37fa4d52a0d2f93ce033"} Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.374711 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18ebce9d4551377976ec16e054986303c212488816b37fa4d52a0d2f93ce033" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.374454 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rfglj" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.472522 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 30 17:35:27 crc kubenswrapper[4688]: E0930 17:35:27.473070 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931d7b6d-de48-4402-ac4e-70fcfa27004a" containerName="nova-cell0-conductor-db-sync" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.473099 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="931d7b6d-de48-4402-ac4e-70fcfa27004a" containerName="nova-cell0-conductor-db-sync" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.473290 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="931d7b6d-de48-4402-ac4e-70fcfa27004a" containerName="nova-cell0-conductor-db-sync" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.475275 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.480549 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.481049 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tsn4v" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.492066 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.555912 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tb5\" (UniqueName: \"kubernetes.io/projected/d73b96c1-ad6b-4c56-a8db-4d4e69ff323e-kube-api-access-s5tb5\") pod \"nova-cell0-conductor-0\" (UID: \"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.556013 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d73b96c1-ad6b-4c56-a8db-4d4e69ff323e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.556114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d73b96c1-ad6b-4c56-a8db-4d4e69ff323e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.658738 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d73b96c1-ad6b-4c56-a8db-4d4e69ff323e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.658976 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d73b96c1-ad6b-4c56-a8db-4d4e69ff323e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.659139 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tb5\" (UniqueName: \"kubernetes.io/projected/d73b96c1-ad6b-4c56-a8db-4d4e69ff323e-kube-api-access-s5tb5\") pod \"nova-cell0-conductor-0\" (UID: \"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.665083 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d73b96c1-ad6b-4c56-a8db-4d4e69ff323e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.665619 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d73b96c1-ad6b-4c56-a8db-4d4e69ff323e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.688839 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tb5\" (UniqueName: \"kubernetes.io/projected/d73b96c1-ad6b-4c56-a8db-4d4e69ff323e-kube-api-access-s5tb5\") pod \"nova-cell0-conductor-0\" (UID: \"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e\") " pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:27 crc kubenswrapper[4688]: I0930 17:35:27.805757 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:28 crc kubenswrapper[4688]: I0930 17:35:28.299534 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 30 17:35:28 crc kubenswrapper[4688]: I0930 17:35:28.391309 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e","Type":"ContainerStarted","Data":"8c6d38277440616bf178283874083cbbaa163cbfccf41bccf05dd6aabcf3ac77"} Sep 30 17:35:29 crc kubenswrapper[4688]: I0930 17:35:29.408908 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d73b96c1-ad6b-4c56-a8db-4d4e69ff323e","Type":"ContainerStarted","Data":"2227c2ff22a07dadeba3f6bca221f13b5d82fa2f0c9ca277943c5bdaa8dfc23b"} Sep 30 17:35:29 crc kubenswrapper[4688]: I0930 17:35:29.409199 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:29 crc kubenswrapper[4688]: I0930 17:35:29.447336 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.447314163 podStartE2EDuration="2.447314163s" podCreationTimestamp="2025-09-30 17:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:35:29.439143242 +0000 UTC m=+1186.734580890" watchObservedRunningTime="2025-09-30 17:35:29.447314163 +0000 UTC m=+1186.742751721" Sep 30 17:35:37 crc kubenswrapper[4688]: I0930 17:35:37.848428 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.313050 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vxwsz"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.314395 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.317733 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.325298 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.326624 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vxwsz"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.437458 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-config-data\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.437518 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.437556 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4n7s\" (UniqueName: \"kubernetes.io/projected/415f4a12-f67d-4bff-a041-94cc19a0abf4-kube-api-access-s4n7s\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.437652 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-scripts\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.531528 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.563231 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.563596 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.570457 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.577337 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-config-data\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.577509 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.578779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4n7s\" (UniqueName: \"kubernetes.io/projected/415f4a12-f67d-4bff-a041-94cc19a0abf4-kube-api-access-s4n7s\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.578917 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-scripts\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.617122 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-scripts\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.631368 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-config-data\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.647513 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.679774 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4n7s\" (UniqueName: \"kubernetes.io/projected/415f4a12-f67d-4bff-a041-94cc19a0abf4-kube-api-access-s4n7s\") pod \"nova-cell0-cell-mapping-vxwsz\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.681033 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.683693 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk9sl\" (UniqueName: \"kubernetes.io/projected/57dca142-01fe-4c2d-85ae-4a49bd945a8f-kube-api-access-qk9sl\") pod \"nova-cell1-novncproxy-0\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.683799 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.724058 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.726111 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.729127 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.768589 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.785958 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.787213 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk9sl\" (UniqueName: \"kubernetes.io/projected/57dca142-01fe-4c2d-85ae-4a49bd945a8f-kube-api-access-qk9sl\") pod \"nova-cell1-novncproxy-0\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.787305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.787456 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.788320 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.801920 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.803394 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.809583 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.820072 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.827555 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk9sl\" (UniqueName: \"kubernetes.io/projected/57dca142-01fe-4c2d-85ae-4a49bd945a8f-kube-api-access-qk9sl\") pod \"nova-cell1-novncproxy-0\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.842306 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-54c4g"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.844541 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.890782 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-54c4g"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.891492 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a45cf5-c43e-4725-b761-ce66e8bd136a-logs\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.891832 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.891857 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.891880 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7r7t\" (UniqueName: \"kubernetes.io/projected/41a45cf5-c43e-4725-b761-ce66e8bd136a-kube-api-access-q7r7t\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:38 crc 
kubenswrapper[4688]: I0930 17:35:38.891929 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx2s5\" (UniqueName: \"kubernetes.io/projected/f2fa47eb-d86b-465a-8e28-622b51bd8179-kube-api-access-zx2s5\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.892007 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-config-data\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.892033 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-config-data\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.892373 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2fa47eb-d86b-465a-8e28-622b51bd8179-logs\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.917384 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.917767 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.919878 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.928096 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.939160 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.939820 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995226 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-dns-svc\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995383 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-config\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995431 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995453 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995468 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7r7t\" (UniqueName: \"kubernetes.io/projected/41a45cf5-c43e-4725-b761-ce66e8bd136a-kube-api-access-q7r7t\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995497 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx2s5\" (UniqueName: \"kubernetes.io/projected/f2fa47eb-d86b-465a-8e28-622b51bd8179-kube-api-access-zx2s5\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995595 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-config-data\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995621 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-config-data\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995669 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-nb\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " 
pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995701 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2fa47eb-d86b-465a-8e28-622b51bd8179-logs\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995722 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf298\" (UniqueName: \"kubernetes.io/projected/fed8bea4-757d-495a-8740-e46ad74b59e5-kube-api-access-zf298\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995745 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a45cf5-c43e-4725-b761-ce66e8bd136a-logs\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.995762 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-sb\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:38 crc kubenswrapper[4688]: I0930 17:35:38.997066 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2fa47eb-d86b-465a-8e28-622b51bd8179-logs\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.001410 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.001896 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a45cf5-c43e-4725-b761-ce66e8bd136a-logs\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.004983 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-config-data\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.019547 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7r7t\" (UniqueName: \"kubernetes.io/projected/41a45cf5-c43e-4725-b761-ce66e8bd136a-kube-api-access-q7r7t\") pod \"nova-metadata-0\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " pod="openstack/nova-metadata-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.021336 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.023194 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-config-data\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.028390 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx2s5\" (UniqueName: \"kubernetes.io/projected/f2fa47eb-d86b-465a-8e28-622b51bd8179-kube-api-access-zx2s5\") pod \"nova-api-0\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " pod="openstack/nova-api-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.092093 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.098637 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-nb\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.098710 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf298\" (UniqueName: \"kubernetes.io/projected/fed8bea4-757d-495a-8740-e46ad74b59e5-kube-api-access-zf298\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.098749 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-sb\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.099312 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-dns-svc\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.100092 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-nb\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.100123 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-sb\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.100182 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-dns-svc\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: 
\"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.100217 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-config-data\") pod \"nova-scheduler-0\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.100289 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-config\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.100437 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzkrr\" (UniqueName: \"kubernetes.io/projected/54dc6487-bcec-4dc8-b513-84c6d95294bf-kube-api-access-kzkrr\") pod \"nova-scheduler-0\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.100468 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.102174 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-config\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.138738 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf298\" (UniqueName: \"kubernetes.io/projected/fed8bea4-757d-495a-8740-e46ad74b59e5-kube-api-access-zf298\") pod \"dnsmasq-dns-56d99cc479-54c4g\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") " pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.202350 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.203809 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-config-data\") pod \"nova-scheduler-0\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.203907 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzkrr\" (UniqueName: \"kubernetes.io/projected/54dc6487-bcec-4dc8-b513-84c6d95294bf-kube-api-access-kzkrr\") pod \"nova-scheduler-0\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.203930 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.219611 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-config-data\") pod \"nova-scheduler-0\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.221455 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.222901 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.231095 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzkrr\" (UniqueName: \"kubernetes.io/projected/54dc6487-bcec-4dc8-b513-84c6d95294bf-kube-api-access-kzkrr\") pod \"nova-scheduler-0\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.245307 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.547136 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-p7bwc"] Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.548711 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.551785 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.551851 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.557846 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-p7bwc"] Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.612485 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.668818 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vxwsz"] Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.720959 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-scripts\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.721083 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnnk5\" (UniqueName: \"kubernetes.io/projected/3da0bc04-a53e-4880-9a5f-3c6152c8e360-kube-api-access-gnnk5\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.721193 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.721236 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-config-data\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.826643 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnnk5\" (UniqueName: \"kubernetes.io/projected/3da0bc04-a53e-4880-9a5f-3c6152c8e360-kube-api-access-gnnk5\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.826875 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.826956 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-config-data\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.827026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-scripts\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.832151 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-scripts\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.837620 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-config-data\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.838291 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.846226 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnnk5\" (UniqueName: \"kubernetes.io/projected/3da0bc04-a53e-4880-9a5f-3c6152c8e360-kube-api-access-gnnk5\") pod \"nova-cell1-conductor-db-sync-p7bwc\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.878052 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:35:39 crc kubenswrapper[4688]: I0930 17:35:39.917256 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:35:39 crc kubenswrapper[4688]: W0930 17:35:39.924730 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41a45cf5_c43e_4725_b761_ce66e8bd136a.slice/crio-2cb147cbc7b823a0b01e4807725b707c201ab165c14f851103f4c970e0b0be4d WatchSource:0}: Error finding container 2cb147cbc7b823a0b01e4807725b707c201ab165c14f851103f4c970e0b0be4d: Status 404 returned error can't find the container with id 2cb147cbc7b823a0b01e4807725b707c201ab165c14f851103f4c970e0b0be4d Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.098699 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.106553 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.136125 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-54c4g"] Sep 30 17:35:40 crc kubenswrapper[4688]: W0930 17:35:40.142439 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfed8bea4_757d_495a_8740_e46ad74b59e5.slice/crio-41a0688024ccfa5e4f65a6a35133909b4a6fe3187f65e14b7912eda5bbb5adc0 WatchSource:0}: Error finding container 41a0688024ccfa5e4f65a6a35133909b4a6fe3187f65e14b7912eda5bbb5adc0: Status 404 returned error can't find the container with id 41a0688024ccfa5e4f65a6a35133909b4a6fe3187f65e14b7912eda5bbb5adc0 Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.428477 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-p7bwc"] Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.543157 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vxwsz" event={"ID":"415f4a12-f67d-4bff-a041-94cc19a0abf4","Type":"ContainerStarted","Data":"d01846ea8c7d40f2eb4b276e89aa57cd66fed47a0e75890edc84c8c9f222a208"} Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.543235 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vxwsz" event={"ID":"415f4a12-f67d-4bff-a041-94cc19a0abf4","Type":"ContainerStarted","Data":"5a1345c7d2579957e031cb8f995eccbbd28906ed49392822887ec9bdec11a071"} Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.549864 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2fa47eb-d86b-465a-8e28-622b51bd8179","Type":"ContainerStarted","Data":"65bf617ca1648fdb684e08678d878015d99728650f2232460a86f6ef15a22fab"} Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.551448 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57dca142-01fe-4c2d-85ae-4a49bd945a8f","Type":"ContainerStarted","Data":"822aa2173b423e6577d3b3a245cdeefd0b6ce52ae3205de04951b22deef34899"} Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.553137 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-p7bwc" event={"ID":"3da0bc04-a53e-4880-9a5f-3c6152c8e360","Type":"ContainerStarted","Data":"91ad78f91b1f2136da3437337baf7ccee756957f35881fc918f28e8147b637a1"} Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 
17:35:40.554375 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41a45cf5-c43e-4725-b761-ce66e8bd136a","Type":"ContainerStarted","Data":"2cb147cbc7b823a0b01e4807725b707c201ab165c14f851103f4c970e0b0be4d"} Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.555651 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54dc6487-bcec-4dc8-b513-84c6d95294bf","Type":"ContainerStarted","Data":"a282ca3ad5b030d7e22ee36f57f8cf1b3a1ab99de3fea2b95f72d6a4707b342a"} Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.558277 4688 generic.go:334] "Generic (PLEG): container finished" podID="fed8bea4-757d-495a-8740-e46ad74b59e5" containerID="10bcc35e7974b9ae6756a76393c538d2f56e94eed87403b160236cedc50d7c7c" exitCode=0 Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.558359 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" event={"ID":"fed8bea4-757d-495a-8740-e46ad74b59e5","Type":"ContainerDied","Data":"10bcc35e7974b9ae6756a76393c538d2f56e94eed87403b160236cedc50d7c7c"} Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.558398 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" event={"ID":"fed8bea4-757d-495a-8740-e46ad74b59e5","Type":"ContainerStarted","Data":"41a0688024ccfa5e4f65a6a35133909b4a6fe3187f65e14b7912eda5bbb5adc0"} Sep 30 17:35:40 crc kubenswrapper[4688]: I0930 17:35:40.574437 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vxwsz" podStartSLOduration=2.574412733 podStartE2EDuration="2.574412733s" podCreationTimestamp="2025-09-30 17:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:35:40.565466362 +0000 UTC m=+1197.860903930" watchObservedRunningTime="2025-09-30 17:35:40.574412733 +0000 UTC m=+1197.869850291" Sep 30 17:35:41 crc kubenswrapper[4688]: I0930 17:35:41.571493 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-p7bwc" event={"ID":"3da0bc04-a53e-4880-9a5f-3c6152c8e360","Type":"ContainerStarted","Data":"03f4b6ab198039e44313643be5ca13e96f3aa82ac4ec593d39bf68104b2cebe4"} Sep 30 17:35:41 crc kubenswrapper[4688]: I0930 17:35:41.577220 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" event={"ID":"fed8bea4-757d-495a-8740-e46ad74b59e5","Type":"ContainerStarted","Data":"ae94cef2c2f4f29c0e995b19088286109f415433a9afb44800dacbe69f7210d5"} Sep 30 17:35:41 crc kubenswrapper[4688]: I0930 17:35:41.577317 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:41 crc kubenswrapper[4688]: I0930 17:35:41.603402 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-p7bwc" podStartSLOduration=2.6033623930000003 podStartE2EDuration="2.603362393s" podCreationTimestamp="2025-09-30 17:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:35:41.59744416 +0000 UTC m=+1198.892881718" watchObservedRunningTime="2025-09-30 17:35:41.603362393 +0000 UTC m=+1198.898799961" Sep 30 17:35:41 crc kubenswrapper[4688]: I0930 17:35:41.628254 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-56d99cc479-54c4g" podStartSLOduration=3.628227656 podStartE2EDuration="3.628227656s" podCreationTimestamp="2025-09-30 17:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:35:41.617361005 +0000 UTC m=+1198.912798583" watchObservedRunningTime="2025-09-30 17:35:41.628227656 +0000 UTC m=+1198.923665214" Sep 30 17:35:42 crc kubenswrapper[4688]: I0930 17:35:42.801993 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:35:42 crc kubenswrapper[4688]: I0930 17:35:42.816835 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:35:43 crc kubenswrapper[4688]: I0930 17:35:43.898390 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.613271 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerName="nova-metadata-log" containerID="cri-o://a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8" gracePeriod=30 Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.613354 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerName="nova-metadata-metadata" containerID="cri-o://e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026" gracePeriod=30 Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.613187 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41a45cf5-c43e-4725-b761-ce66e8bd136a","Type":"ContainerStarted","Data":"e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026"} Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.614115 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41a45cf5-c43e-4725-b761-ce66e8bd136a","Type":"ContainerStarted","Data":"a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8"} Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.618110 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54dc6487-bcec-4dc8-b513-84c6d95294bf","Type":"ContainerStarted","Data":"7bec149c91c5393c5a9a48b2b954a6f50888de1084a1d2fdaa24d2df8920ff0c"} Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.627979 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2fa47eb-d86b-465a-8e28-622b51bd8179","Type":"ContainerStarted","Data":"ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b"} Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.628045 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2fa47eb-d86b-465a-8e28-622b51bd8179","Type":"ContainerStarted","Data":"1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e"} Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.642798 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57dca142-01fe-4c2d-85ae-4a49bd945a8f","Type":"ContainerStarted","Data":"eb2ba3e454a1a4a8f27cf3fe2bfdaf9bc801a47af1cf48d5e230821b716ddf9e"} Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.643379 4688 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="57dca142-01fe-4c2d-85ae-4a49bd945a8f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://eb2ba3e454a1a4a8f27cf3fe2bfdaf9bc801a47af1cf48d5e230821b716ddf9e" gracePeriod=30 Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.649657 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.708980974 podStartE2EDuration="6.64962848s" podCreationTimestamp="2025-09-30 17:35:38 +0000 UTC" firstStartedPulling="2025-09-30 17:35:39.934735014 +0000 UTC m=+1197.230172572" lastFinishedPulling="2025-09-30 17:35:43.87538252 +0000 UTC m=+1201.170820078" observedRunningTime="2025-09-30 17:35:44.642832824 +0000 UTC m=+1201.938270402" watchObservedRunningTime="2025-09-30 17:35:44.64962848 +0000 UTC m=+1201.945066038" Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.687867 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.932738054 podStartE2EDuration="6.687842219s" podCreationTimestamp="2025-09-30 17:35:38 +0000 UTC" firstStartedPulling="2025-09-30 17:35:40.12048376 +0000 UTC m=+1197.415921318" lastFinishedPulling="2025-09-30 17:35:43.875587925 +0000 UTC m=+1201.171025483" observedRunningTime="2025-09-30 17:35:44.666143627 +0000 UTC m=+1201.961581185" watchObservedRunningTime="2025-09-30 17:35:44.687842219 +0000 UTC m=+1201.983279767" Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.754971 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.99600163 podStartE2EDuration="6.754943775s" podCreationTimestamp="2025-09-30 17:35:38 +0000 UTC" firstStartedPulling="2025-09-30 17:35:40.120344176 +0000 UTC m=+1197.415781734" lastFinishedPulling="2025-09-30 17:35:43.879286321 +0000 UTC m=+1201.174723879" observedRunningTime="2025-09-30 17:35:44.703235437 +0000 UTC m=+1201.998673015" watchObservedRunningTime="2025-09-30 17:35:44.754943775 +0000 UTC m=+1202.050381333" Sep 30 17:35:44 crc kubenswrapper[4688]: I0930 17:35:44.759249 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.513897118 podStartE2EDuration="6.759236416s" podCreationTimestamp="2025-09-30 17:35:38 +0000 UTC" firstStartedPulling="2025-09-30 17:35:39.635311288 +0000 UTC m=+1196.930748846" lastFinishedPulling="2025-09-30 17:35:43.880650586 +0000 UTC m=+1201.176088144" observedRunningTime="2025-09-30 17:35:44.74818197 +0000 UTC m=+1202.043619528" watchObservedRunningTime="2025-09-30 17:35:44.759236416 +0000 UTC m=+1202.054673974" Sep 30 17:35:45 crc kubenswrapper[4688]: I0930 17:35:45.484399 4688 scope.go:117] "RemoveContainer" containerID="b469b885959a42510c90ffdb1a64c41581f227d10cc7b7f6930a3703bea357cf" Sep 30 17:35:45 crc kubenswrapper[4688]: I0930 17:35:45.536828 4688 scope.go:117] "RemoveContainer" containerID="4fa2ae812da948cf20bd914de1843c677d55d59dc0a27db77f51ce6afce82fc0" Sep 30 17:35:45 crc kubenswrapper[4688]: I0930 17:35:45.597877 4688 scope.go:117] "RemoveContainer" containerID="a0e342670408e77d3213afe31f58684ceeda1378e2b81ee419f55e2b74fea8d9" Sep 30 17:35:45 crc kubenswrapper[4688]: I0930 17:35:45.637385 4688 scope.go:117] "RemoveContainer" containerID="8ac9731275ac5c458a8bf81477397b12d07f42b6fd45192aaea28de585cf83ed" Sep 30 17:35:45 crc kubenswrapper[4688]: I0930 17:35:45.661876 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerID="a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8" exitCode=143 Sep 30 17:35:45 crc kubenswrapper[4688]: I0930 17:35:45.661977 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41a45cf5-c43e-4725-b761-ce66e8bd136a","Type":"ContainerDied","Data":"a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8"} Sep 30 17:35:45 crc kubenswrapper[4688]: I0930 17:35:45.698292 4688 scope.go:117] "RemoveContainer" containerID="9058bf0b2f88b3066ee835cdeb68d9cdd97a543265eaf3ba6a0aa8401b539c97" Sep 30 17:35:45 crc kubenswrapper[4688]: I0930 17:35:45.741792 4688 scope.go:117] "RemoveContainer" containerID="92bd105638f63d032139ec0e818a1c277a7a3a89762b63fe42fb8b4132e4c025" Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.264378 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.266733 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b0233cd8-f154-4a33-9ff8-fd968ab0a4a3" containerName="kube-state-metrics" containerID="cri-o://d2fd4a2f3bf3a07ba713bde55401da324d1aadafad23a499e98a1d83b3e8711c" gracePeriod=30 Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.717913 4688 generic.go:334] "Generic (PLEG): container finished" podID="b0233cd8-f154-4a33-9ff8-fd968ab0a4a3" containerID="d2fd4a2f3bf3a07ba713bde55401da324d1aadafad23a499e98a1d83b3e8711c" exitCode=2 Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.717980 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b0233cd8-f154-4a33-9ff8-fd968ab0a4a3","Type":"ContainerDied","Data":"d2fd4a2f3bf3a07ba713bde55401da324d1aadafad23a499e98a1d83b3e8711c"} Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.718214 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b0233cd8-f154-4a33-9ff8-fd968ab0a4a3","Type":"ContainerDied","Data":"f0d26a4ca0cc35ef51cd390edc9346d5d58fd7bfd3372902092e6ff23d4dd53d"} Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.718231 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0d26a4ca0cc35ef51cd390edc9346d5d58fd7bfd3372902092e6ff23d4dd53d" Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.789049 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.869017 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8cqq\" (UniqueName: \"kubernetes.io/projected/b0233cd8-f154-4a33-9ff8-fd968ab0a4a3-kube-api-access-j8cqq\") pod \"b0233cd8-f154-4a33-9ff8-fd968ab0a4a3\" (UID: \"b0233cd8-f154-4a33-9ff8-fd968ab0a4a3\") " Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.880920 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0233cd8-f154-4a33-9ff8-fd968ab0a4a3-kube-api-access-j8cqq" (OuterVolumeSpecName: "kube-api-access-j8cqq") pod "b0233cd8-f154-4a33-9ff8-fd968ab0a4a3" (UID: "b0233cd8-f154-4a33-9ff8-fd968ab0a4a3"). InnerVolumeSpecName "kube-api-access-j8cqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.920277 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:35:48 crc kubenswrapper[4688]: I0930 17:35:48.971927 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8cqq\" (UniqueName: \"kubernetes.io/projected/b0233cd8-f154-4a33-9ff8-fd968ab0a4a3-kube-api-access-j8cqq\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.092862 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.092953 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.204837 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.205213 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.223893 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.246964 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.247063 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.336925 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-9ptsq"] Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.337249 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" podUID="3f7110ad-b1f2-4abf-be08-2b428fbd4c94" containerName="dnsmasq-dns" containerID="cri-o://2add07d18dd42f31e9b7bc40932090d776c67991def224a28974e3729219caa8" gracePeriod=10 Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.363297 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.728858 4688 generic.go:334] "Generic (PLEG): container finished" podID="415f4a12-f67d-4bff-a041-94cc19a0abf4" containerID="d01846ea8c7d40f2eb4b276e89aa57cd66fed47a0e75890edc84c8c9f222a208" exitCode=0 Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.728954 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vxwsz" event={"ID":"415f4a12-f67d-4bff-a041-94cc19a0abf4","Type":"ContainerDied","Data":"d01846ea8c7d40f2eb4b276e89aa57cd66fed47a0e75890edc84c8c9f222a208"} Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.731526 4688 generic.go:334] "Generic (PLEG): container finished" podID="3f7110ad-b1f2-4abf-be08-2b428fbd4c94" containerID="2add07d18dd42f31e9b7bc40932090d776c67991def224a28974e3729219caa8" exitCode=0 Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.731601 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" event={"ID":"3f7110ad-b1f2-4abf-be08-2b428fbd4c94","Type":"ContainerDied","Data":"2add07d18dd42f31e9b7bc40932090d776c67991def224a28974e3729219caa8"} Sep 30 17:35:49 crc 
kubenswrapper[4688]: I0930 17:35:49.731903 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.794672 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.809409 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.831903 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 17:35:49 crc kubenswrapper[4688]: E0930 17:35:49.832375 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0233cd8-f154-4a33-9ff8-fd968ab0a4a3" containerName="kube-state-metrics" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.832394 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0233cd8-f154-4a33-9ff8-fd968ab0a4a3" containerName="kube-state-metrics" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.832633 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0233cd8-f154-4a33-9ff8-fd968ab0a4a3" containerName="kube-state-metrics" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.833389 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.836330 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.836537 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.861638 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.888373 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.926817 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2feddb6b-68dd-45da-aea7-ce4061c8b067-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.927420 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2feddb6b-68dd-45da-aea7-ce4061c8b067-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.928233 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2feddb6b-68dd-45da-aea7-ce4061c8b067-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:49 crc kubenswrapper[4688]: I0930 17:35:49.928433 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsbd\" (UniqueName: 
\"kubernetes.io/projected/2feddb6b-68dd-45da-aea7-ce4061c8b067-kube-api-access-vxsbd\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.031118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2feddb6b-68dd-45da-aea7-ce4061c8b067-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.031243 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxsbd\" (UniqueName: \"kubernetes.io/projected/2feddb6b-68dd-45da-aea7-ce4061c8b067-kube-api-access-vxsbd\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.031342 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2feddb6b-68dd-45da-aea7-ce4061c8b067-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.031556 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2feddb6b-68dd-45da-aea7-ce4061c8b067-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.039127 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2feddb6b-68dd-45da-aea7-ce4061c8b067-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.039309 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2feddb6b-68dd-45da-aea7-ce4061c8b067-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.041244 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2feddb6b-68dd-45da-aea7-ce4061c8b067-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.052945 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxsbd\" (UniqueName: \"kubernetes.io/projected/2feddb6b-68dd-45da-aea7-ce4061c8b067-kube-api-access-vxsbd\") pod \"kube-state-metrics-0\" (UID: \"2feddb6b-68dd-45da-aea7-ce4061c8b067\") " pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.157823 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.293961 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.294003 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.648674 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.704366 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.754373 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-nb\") pod \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.754436 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-sb\") pod \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.754706 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-config\") pod \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.754798 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-dns-svc\") pod \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.754910 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4wdv\" (UniqueName: \"kubernetes.io/projected/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-kube-api-access-x4wdv\") pod \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\" (UID: \"3f7110ad-b1f2-4abf-be08-2b428fbd4c94\") " Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.765828 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-kube-api-access-x4wdv" (OuterVolumeSpecName: "kube-api-access-x4wdv") pod "3f7110ad-b1f2-4abf-be08-2b428fbd4c94" (UID: "3f7110ad-b1f2-4abf-be08-2b428fbd4c94"). InnerVolumeSpecName "kube-api-access-x4wdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.793907 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2feddb6b-68dd-45da-aea7-ce4061c8b067","Type":"ContainerStarted","Data":"705410d8f7ab4e61ae48e0989a30d139559b22d2774c5365e2cb539cd1ee6e04"} Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.799342 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.799418 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fff66767-9ptsq" event={"ID":"3f7110ad-b1f2-4abf-be08-2b428fbd4c94","Type":"ContainerDied","Data":"3a77301ac35c165791d35d49f09b8be70759b4971aa9ac60e3646d1006332b40"} Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.799509 4688 scope.go:117] "RemoveContainer" containerID="2add07d18dd42f31e9b7bc40932090d776c67991def224a28974e3729219caa8" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.822295 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-config" (OuterVolumeSpecName: "config") pod "3f7110ad-b1f2-4abf-be08-2b428fbd4c94" (UID: "3f7110ad-b1f2-4abf-be08-2b428fbd4c94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.823937 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f7110ad-b1f2-4abf-be08-2b428fbd4c94" (UID: "3f7110ad-b1f2-4abf-be08-2b428fbd4c94"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.840891 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f7110ad-b1f2-4abf-be08-2b428fbd4c94" (UID: "3f7110ad-b1f2-4abf-be08-2b428fbd4c94"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.854179 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f7110ad-b1f2-4abf-be08-2b428fbd4c94" (UID: "3f7110ad-b1f2-4abf-be08-2b428fbd4c94"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.858020 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.858057 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.858067 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4wdv\" (UniqueName: \"kubernetes.io/projected/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-kube-api-access-x4wdv\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.858078 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.858087 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f7110ad-b1f2-4abf-be08-2b428fbd4c94-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.973843 4688 scope.go:117] "RemoveContainer" containerID="da8b92bc82f6033da9bde46066f8665903835e7758b161f763d922569441d44a" Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.975367 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.975687 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="ceilometer-central-agent" containerID="cri-o://202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e" gracePeriod=30 Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.976167 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="proxy-httpd" containerID="cri-o://839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946" gracePeriod=30 Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.976222 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="sg-core" containerID="cri-o://971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd" gracePeriod=30 Sep 30 17:35:50 crc kubenswrapper[4688]: I0930 17:35:50.976193 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="ceilometer-notification-agent" containerID="cri-o://5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed" gracePeriod=30 Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.195927 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-9ptsq"] Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.208613 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.217672 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57fff66767-9ptsq"] Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.266949 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-scripts\") pod \"415f4a12-f67d-4bff-a041-94cc19a0abf4\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.267468 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-config-data\") pod \"415f4a12-f67d-4bff-a041-94cc19a0abf4\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.267779 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n7s\" (UniqueName: \"kubernetes.io/projected/415f4a12-f67d-4bff-a041-94cc19a0abf4-kube-api-access-s4n7s\") pod \"415f4a12-f67d-4bff-a041-94cc19a0abf4\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.268017 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-combined-ca-bundle\") pod \"415f4a12-f67d-4bff-a041-94cc19a0abf4\" (UID: \"415f4a12-f67d-4bff-a041-94cc19a0abf4\") " Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.275757 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-scripts" (OuterVolumeSpecName: "scripts") pod "415f4a12-f67d-4bff-a041-94cc19a0abf4" (UID: "415f4a12-f67d-4bff-a041-94cc19a0abf4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.279859 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415f4a12-f67d-4bff-a041-94cc19a0abf4-kube-api-access-s4n7s" (OuterVolumeSpecName: "kube-api-access-s4n7s") pod "415f4a12-f67d-4bff-a041-94cc19a0abf4" (UID: "415f4a12-f67d-4bff-a041-94cc19a0abf4"). InnerVolumeSpecName "kube-api-access-s4n7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.304193 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-config-data" (OuterVolumeSpecName: "config-data") pod "415f4a12-f67d-4bff-a041-94cc19a0abf4" (UID: "415f4a12-f67d-4bff-a041-94cc19a0abf4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.316375 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "415f4a12-f67d-4bff-a041-94cc19a0abf4" (UID: "415f4a12-f67d-4bff-a041-94cc19a0abf4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.371657 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.371714 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n7s\" (UniqueName: \"kubernetes.io/projected/415f4a12-f67d-4bff-a041-94cc19a0abf4-kube-api-access-s4n7s\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.371733 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.371773 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/415f4a12-f67d-4bff-a041-94cc19a0abf4-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.464433 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7110ad-b1f2-4abf-be08-2b428fbd4c94" path="/var/lib/kubelet/pods/3f7110ad-b1f2-4abf-be08-2b428fbd4c94/volumes" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.465967 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0233cd8-f154-4a33-9ff8-fd968ab0a4a3" path="/var/lib/kubelet/pods/b0233cd8-f154-4a33-9ff8-fd968ab0a4a3/volumes" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.825816 4688 generic.go:334] "Generic (PLEG): container finished" podID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerID="839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946" exitCode=0 Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.826134 4688 generic.go:334] "Generic (PLEG): container finished" podID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerID="971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd" exitCode=2 Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.826143 4688 generic.go:334] "Generic (PLEG): container finished" podID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerID="202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e" exitCode=0 Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.826206 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerDied","Data":"839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946"} Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.826238 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerDied","Data":"971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd"} Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.826251 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerDied","Data":"202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e"} Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.835296 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vxwsz" 
event={"ID":"415f4a12-f67d-4bff-a041-94cc19a0abf4","Type":"ContainerDied","Data":"5a1345c7d2579957e031cb8f995eccbbd28906ed49392822887ec9bdec11a071"} Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.835337 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a1345c7d2579957e031cb8f995eccbbd28906ed49392822887ec9bdec11a071" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.835340 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vxwsz" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.841035 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2feddb6b-68dd-45da-aea7-ce4061c8b067","Type":"ContainerStarted","Data":"abe7cbf649017fc9bec3d09bc99d1f98d6dc329a10e53281deb7e26762667954"} Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.841308 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.863880 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.164045559 podStartE2EDuration="2.863856313s" podCreationTimestamp="2025-09-30 17:35:49 +0000 UTC" firstStartedPulling="2025-09-30 17:35:50.659021934 +0000 UTC m=+1207.954459492" lastFinishedPulling="2025-09-30 17:35:51.358832688 +0000 UTC m=+1208.654270246" observedRunningTime="2025-09-30 17:35:51.861651366 +0000 UTC m=+1209.157088944" watchObservedRunningTime="2025-09-30 17:35:51.863856313 +0000 UTC m=+1209.159293881" Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.973363 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.973719 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-log" containerID="cri-o://1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e" gracePeriod=30 Sep 30 17:35:51 crc kubenswrapper[4688]: I0930 17:35:51.973902 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-api" containerID="cri-o://ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b" gracePeriod=30 Sep 30 17:35:52 crc kubenswrapper[4688]: I0930 17:35:52.001415 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:35:52 crc kubenswrapper[4688]: I0930 17:35:52.002339 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="54dc6487-bcec-4dc8-b513-84c6d95294bf" containerName="nova-scheduler-scheduler" containerID="cri-o://7bec149c91c5393c5a9a48b2b954a6f50888de1084a1d2fdaa24d2df8920ff0c" gracePeriod=30 Sep 30 17:35:52 crc kubenswrapper[4688]: I0930 17:35:52.852646 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerID="1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e" exitCode=143 Sep 30 17:35:52 crc kubenswrapper[4688]: I0930 17:35:52.852711 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2fa47eb-d86b-465a-8e28-622b51bd8179","Type":"ContainerDied","Data":"1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e"} Sep 30 
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.797652 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.866021 4688 generic.go:334] "Generic (PLEG): container finished" podID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerID="5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed" exitCode=0
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.866102 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.866079 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerDied","Data":"5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed"}
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.866267 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e65fb60c-c034-4960-aecb-651fbee16e3a","Type":"ContainerDied","Data":"0823f8e1a077691fd99d89314391e13f6558aada1891be4d30b32ed15345e643"}
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.866292 4688 scope.go:117] "RemoveContainer" containerID="839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946"
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.906772 4688 scope.go:117] "RemoveContainer" containerID="971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd"
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.925530 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-config-data\") pod \"e65fb60c-c034-4960-aecb-651fbee16e3a\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") "
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.925646 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzkqm\" (UniqueName: \"kubernetes.io/projected/e65fb60c-c034-4960-aecb-651fbee16e3a-kube-api-access-rzkqm\") pod \"e65fb60c-c034-4960-aecb-651fbee16e3a\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") "
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.925811 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-scripts\") pod \"e65fb60c-c034-4960-aecb-651fbee16e3a\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") "
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.925840 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-combined-ca-bundle\") pod \"e65fb60c-c034-4960-aecb-651fbee16e3a\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") "
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.925917 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-run-httpd\") pod \"e65fb60c-c034-4960-aecb-651fbee16e3a\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") "
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.926013 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-log-httpd\") pod \"e65fb60c-c034-4960-aecb-651fbee16e3a\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") "
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.926078 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-sg-core-conf-yaml\") pod \"e65fb60c-c034-4960-aecb-651fbee16e3a\" (UID: \"e65fb60c-c034-4960-aecb-651fbee16e3a\") "
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.930096 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e65fb60c-c034-4960-aecb-651fbee16e3a" (UID: "e65fb60c-c034-4960-aecb-651fbee16e3a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.930616 4688 scope.go:117] "RemoveContainer" containerID="5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed"
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.931175 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e65fb60c-c034-4960-aecb-651fbee16e3a" (UID: "e65fb60c-c034-4960-aecb-651fbee16e3a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.933072 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-scripts" (OuterVolumeSpecName: "scripts") pod "e65fb60c-c034-4960-aecb-651fbee16e3a" (UID: "e65fb60c-c034-4960-aecb-651fbee16e3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.933662 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65fb60c-c034-4960-aecb-651fbee16e3a-kube-api-access-rzkqm" (OuterVolumeSpecName: "kube-api-access-rzkqm") pod "e65fb60c-c034-4960-aecb-651fbee16e3a" (UID: "e65fb60c-c034-4960-aecb-651fbee16e3a"). InnerVolumeSpecName "kube-api-access-rzkqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:35:53 crc kubenswrapper[4688]: I0930 17:35:53.965421 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e65fb60c-c034-4960-aecb-651fbee16e3a" (UID: "e65fb60c-c034-4960-aecb-651fbee16e3a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.019565 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e65fb60c-c034-4960-aecb-651fbee16e3a" (UID: "e65fb60c-c034-4960-aecb-651fbee16e3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.028789 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzkqm\" (UniqueName: \"kubernetes.io/projected/e65fb60c-c034-4960-aecb-651fbee16e3a-kube-api-access-rzkqm\") on node \"crc\" DevicePath \"\""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.028834 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.028846 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.028856 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-run-httpd\") on node \"crc\" DevicePath \"\""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.028866 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e65fb60c-c034-4960-aecb-651fbee16e3a-log-httpd\") on node \"crc\" DevicePath \"\""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.028878 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.058405 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-config-data" (OuterVolumeSpecName: "config-data") pod "e65fb60c-c034-4960-aecb-651fbee16e3a" (UID: "e65fb60c-c034-4960-aecb-651fbee16e3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.076822 4688 scope.go:117] "RemoveContainer" containerID="202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.114014 4688 scope.go:117] "RemoveContainer" containerID="839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.114749 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946\": container with ID starting with 839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946 not found: ID does not exist" containerID="839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.114805 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946"} err="failed to get container status \"839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946\": rpc error: code = NotFound desc = could not find container \"839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946\": container with ID starting with 839de50f6527c4e1bc61ce8fa0d3029aaf4a35ec2bab29b207f66ff2d564f946 not found: ID does not exist"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.114839 4688 scope.go:117] "RemoveContainer" containerID="971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.115374 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd\": container with ID starting with 971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd not found: ID does not exist" containerID="971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.115439 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd"} err="failed to get container status \"971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd\": rpc error: code = NotFound desc = could not find container \"971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd\": container with ID starting with 971b5c7c9b0853f7d8ab3add8f4cd873ea69a18838b89a7d87a497c26c2ed4fd not found: ID does not exist"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.115482 4688 scope.go:117] "RemoveContainer" containerID="5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.116073 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed\": container with ID starting with 5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed not found: ID does not exist" containerID="5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.116159 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed"} err="failed to get container status \"5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed\": rpc error: code = NotFound desc = could not find container \"5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed\": container with ID starting with 5a71b4a4e9d7d36d556d66c0d45eafe8f8f2c3d3a6462e334ecc8037a1dab1ed not found: ID does not exist"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.116245 4688 scope.go:117] "RemoveContainer" containerID="202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.116618 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e\": container with ID starting with 202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e not found: ID does not exist" containerID="202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.116660 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e"} err="failed to get container status \"202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e\": rpc error: code = NotFound desc = could not find container \"202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e\": container with ID starting with 202e868d59ec8aa84622eeb1e1c218f1861672420fec68e4cffffcb61cb7f34e not found: ID does not exist"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.130457 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65fb60c-c034-4960-aecb-651fbee16e3a-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.208313 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.224075 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.244919 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.245411 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7110ad-b1f2-4abf-be08-2b428fbd4c94" containerName="init"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245429 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7110ad-b1f2-4abf-be08-2b428fbd4c94" containerName="init"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.245447 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="sg-core"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245454 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="sg-core"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.245473 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="ceilometer-central-agent"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245479 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="ceilometer-central-agent"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.245497 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415f4a12-f67d-4bff-a041-94cc19a0abf4" containerName="nova-manage"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245504 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="415f4a12-f67d-4bff-a041-94cc19a0abf4" containerName="nova-manage"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.245521 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="ceilometer-notification-agent"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245529 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="ceilometer-notification-agent"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.245546 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7110ad-b1f2-4abf-be08-2b428fbd4c94" containerName="dnsmasq-dns"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245552 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7110ad-b1f2-4abf-be08-2b428fbd4c94" containerName="dnsmasq-dns"
Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.245559 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="proxy-httpd"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245580 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="proxy-httpd"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245797 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="sg-core"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245812 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="proxy-httpd"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245823 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7110ad-b1f2-4abf-be08-2b428fbd4c94" containerName="dnsmasq-dns"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245832 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="ceilometer-notification-agent"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245841 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" containerName="ceilometer-central-agent"
Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.245866 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="415f4a12-f67d-4bff-a041-94cc19a0abf4" containerName="nova-manage"
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.249119 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7bec149c91c5393c5a9a48b2b954a6f50888de1084a1d2fdaa24d2df8920ff0c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.251351 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.251426 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.251585 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.253415 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7bec149c91c5393c5a9a48b2b954a6f50888de1084a1d2fdaa24d2df8920ff0c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.257671 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.258930 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7bec149c91c5393c5a9a48b2b954a6f50888de1084a1d2fdaa24d2df8920ff0c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 17:35:54 crc kubenswrapper[4688]: E0930 17:35:54.259042 4688 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="54dc6487-bcec-4dc8-b513-84c6d95294bf" containerName="nova-scheduler-scheduler" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.334993 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.335532 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.335603 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-run-httpd\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.335685 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-scripts\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.335719 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l88x\" (UniqueName: \"kubernetes.io/projected/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-kube-api-access-6l88x\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.335781 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.335911 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-config-data\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.335990 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-log-httpd\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.437938 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-run-httpd\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.438017 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-scripts\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.438460 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-run-httpd\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.438045 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l88x\" (UniqueName: \"kubernetes.io/projected/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-kube-api-access-6l88x\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.438553 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.438633 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-config-data\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.438833 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-log-httpd\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.438886 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.438915 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.439434 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-log-httpd\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.444511 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.444759 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-config-data\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.444824 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.445669 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.447805 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-scripts\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.462332 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6l88x\" (UniqueName: \"kubernetes.io/projected/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-kube-api-access-6l88x\") pod \"ceilometer-0\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " pod="openstack/ceilometer-0" Sep 30 17:35:54 crc kubenswrapper[4688]: I0930 17:35:54.588787 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:35:55 crc kubenswrapper[4688]: I0930 17:35:55.052815 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:35:55 crc kubenswrapper[4688]: I0930 17:35:55.441612 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65fb60c-c034-4960-aecb-651fbee16e3a" path="/var/lib/kubelet/pods/e65fb60c-c034-4960-aecb-651fbee16e3a/volumes" Sep 30 17:35:55 crc kubenswrapper[4688]: I0930 17:35:55.888756 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerStarted","Data":"cb0a357a75c546800c3086d007a72a276b9e3f3709021dde0331efb043c8bcd7"} Sep 30 17:35:55 crc kubenswrapper[4688]: E0930 17:35:55.986498 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="cdd2efbd-01c7-43ce-8521-e4b7de27bf28" Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.837541 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.909474 4688 generic.go:334] "Generic (PLEG): container finished" podID="54dc6487-bcec-4dc8-b513-84c6d95294bf" containerID="7bec149c91c5393c5a9a48b2b954a6f50888de1084a1d2fdaa24d2df8920ff0c" exitCode=0 Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.909536 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54dc6487-bcec-4dc8-b513-84c6d95294bf","Type":"ContainerDied","Data":"7bec149c91c5393c5a9a48b2b954a6f50888de1084a1d2fdaa24d2df8920ff0c"} Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.911096 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerID="ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b" exitCode=0 Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.911136 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2fa47eb-d86b-465a-8e28-622b51bd8179","Type":"ContainerDied","Data":"ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b"} Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.911152 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2fa47eb-d86b-465a-8e28-622b51bd8179","Type":"ContainerDied","Data":"65bf617ca1648fdb684e08678d878015d99728650f2232460a86f6ef15a22fab"} Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.911169 4688 scope.go:117] "RemoveContainer" containerID="ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b" Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.911298 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.915317 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.916258 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerStarted","Data":"a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8"} Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.916296 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerStarted","Data":"c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d"} Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.941617 4688 scope.go:117] "RemoveContainer" containerID="1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e" Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.964593 4688 scope.go:117] "RemoveContainer" containerID="ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b" Sep 30 17:35:56 crc kubenswrapper[4688]: E0930 17:35:56.965268 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b\": container with ID starting with ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b not found: ID does not exist" containerID="ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b" Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.965335 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b"} err="failed to get container status \"ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b\": rpc error: code = NotFound desc = could not find container \"ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b\": container with ID starting with ad182933ca198c78383c8aab0b916f9fd3c8fe76879df481dca455659661c53b not found: ID does not exist" Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.965376 4688 scope.go:117] "RemoveContainer" containerID="1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e" Sep 30 17:35:56 crc kubenswrapper[4688]: E0930 17:35:56.965825 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e\": container with ID starting with 1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e not found: ID does not exist" containerID="1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e" Sep 30 17:35:56 crc kubenswrapper[4688]: I0930 17:35:56.965863 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e"} err="failed to get container status \"1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e\": rpc error: code = NotFound desc = could not find container \"1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e\": container with ID starting with 1b601c17e36422fd3777c150ffca86c608cbb4f5a8d2b9eeb7de859b3d02e91e not found: ID does not exist" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.000953 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx2s5\" (UniqueName: 
\"kubernetes.io/projected/f2fa47eb-d86b-465a-8e28-622b51bd8179-kube-api-access-zx2s5\") pod \"f2fa47eb-d86b-465a-8e28-622b51bd8179\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.001345 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2fa47eb-d86b-465a-8e28-622b51bd8179-logs\") pod \"f2fa47eb-d86b-465a-8e28-622b51bd8179\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.001392 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-combined-ca-bundle\") pod \"f2fa47eb-d86b-465a-8e28-622b51bd8179\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.001453 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-config-data\") pod \"f2fa47eb-d86b-465a-8e28-622b51bd8179\" (UID: \"f2fa47eb-d86b-465a-8e28-622b51bd8179\") " Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.004027 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2fa47eb-d86b-465a-8e28-622b51bd8179-logs" (OuterVolumeSpecName: "logs") pod "f2fa47eb-d86b-465a-8e28-622b51bd8179" (UID: "f2fa47eb-d86b-465a-8e28-622b51bd8179"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.009931 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2fa47eb-d86b-465a-8e28-622b51bd8179-kube-api-access-zx2s5" (OuterVolumeSpecName: "kube-api-access-zx2s5") pod "f2fa47eb-d86b-465a-8e28-622b51bd8179" (UID: "f2fa47eb-d86b-465a-8e28-622b51bd8179"). InnerVolumeSpecName "kube-api-access-zx2s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.019523 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.040561 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2fa47eb-d86b-465a-8e28-622b51bd8179" (UID: "f2fa47eb-d86b-465a-8e28-622b51bd8179"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.046417 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-config-data" (OuterVolumeSpecName: "config-data") pod "f2fa47eb-d86b-465a-8e28-622b51bd8179" (UID: "f2fa47eb-d86b-465a-8e28-622b51bd8179"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.104643 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-config-data\") pod \"54dc6487-bcec-4dc8-b513-84c6d95294bf\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.104822 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-combined-ca-bundle\") pod \"54dc6487-bcec-4dc8-b513-84c6d95294bf\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.105065 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzkrr\" (UniqueName: \"kubernetes.io/projected/54dc6487-bcec-4dc8-b513-84c6d95294bf-kube-api-access-kzkrr\") pod \"54dc6487-bcec-4dc8-b513-84c6d95294bf\" (UID: \"54dc6487-bcec-4dc8-b513-84c6d95294bf\") " Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.105785 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx2s5\" (UniqueName: \"kubernetes.io/projected/f2fa47eb-d86b-465a-8e28-622b51bd8179-kube-api-access-zx2s5\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.105804 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2fa47eb-d86b-465a-8e28-622b51bd8179-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.105815 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.105825 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fa47eb-d86b-465a-8e28-622b51bd8179-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.111117 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54dc6487-bcec-4dc8-b513-84c6d95294bf-kube-api-access-kzkrr" (OuterVolumeSpecName: "kube-api-access-kzkrr") pod "54dc6487-bcec-4dc8-b513-84c6d95294bf" (UID: "54dc6487-bcec-4dc8-b513-84c6d95294bf"). InnerVolumeSpecName "kube-api-access-kzkrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.138117 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54dc6487-bcec-4dc8-b513-84c6d95294bf" (UID: "54dc6487-bcec-4dc8-b513-84c6d95294bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.151704 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-config-data" (OuterVolumeSpecName: "config-data") pod "54dc6487-bcec-4dc8-b513-84c6d95294bf" (UID: "54dc6487-bcec-4dc8-b513-84c6d95294bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.208218 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzkrr\" (UniqueName: \"kubernetes.io/projected/54dc6487-bcec-4dc8-b513-84c6d95294bf-kube-api-access-kzkrr\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.208254 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.208264 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54dc6487-bcec-4dc8-b513-84c6d95294bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.248193 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.257251 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.277784 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 17:35:57 crc kubenswrapper[4688]: E0930 17:35:57.278407 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-log" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.278439 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-log" Sep 30 17:35:57 crc kubenswrapper[4688]: E0930 17:35:57.278482 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-api" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.278490 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-api" Sep 30 17:35:57 crc kubenswrapper[4688]: E0930 17:35:57.278502 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54dc6487-bcec-4dc8-b513-84c6d95294bf" containerName="nova-scheduler-scheduler" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.278511 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="54dc6487-bcec-4dc8-b513-84c6d95294bf" containerName="nova-scheduler-scheduler" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.278773 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="54dc6487-bcec-4dc8-b513-84c6d95294bf" containerName="nova-scheduler-scheduler" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.278796 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-log" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.278811 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" containerName="nova-api-api" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.284425 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.287779 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.306011 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.412860 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f216c00a-b59a-4679-87e9-2d137c20040f-logs\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.412950 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxt7x\" (UniqueName: \"kubernetes.io/projected/f216c00a-b59a-4679-87e9-2d137c20040f-kube-api-access-lxt7x\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.413104 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.413151 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-config-data\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.456232 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2fa47eb-d86b-465a-8e28-622b51bd8179" path="/var/lib/kubelet/pods/f2fa47eb-d86b-465a-8e28-622b51bd8179/volumes" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.515263 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f216c00a-b59a-4679-87e9-2d137c20040f-logs\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.515382 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxt7x\" (UniqueName: \"kubernetes.io/projected/f216c00a-b59a-4679-87e9-2d137c20040f-kube-api-access-lxt7x\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.515500 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.515527 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-config-data\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 
17:35:57.515820 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f216c00a-b59a-4679-87e9-2d137c20040f-logs\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.521232 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.527267 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-config-data\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.535805 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxt7x\" (UniqueName: \"kubernetes.io/projected/f216c00a-b59a-4679-87e9-2d137c20040f-kube-api-access-lxt7x\") pod \"nova-api-0\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.609984 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.925756 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54dc6487-bcec-4dc8-b513-84c6d95294bf","Type":"ContainerDied","Data":"a282ca3ad5b030d7e22ee36f57f8cf1b3a1ab99de3fea2b95f72d6a4707b342a"} Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.926200 4688 scope.go:117] "RemoveContainer" containerID="7bec149c91c5393c5a9a48b2b954a6f50888de1084a1d2fdaa24d2df8920ff0c" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.925781 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.931006 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerStarted","Data":"694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7"} Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.958254 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.975255 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.987260 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.988843 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:35:57 crc kubenswrapper[4688]: I0930 17:35:57.994889 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.002066 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.090735 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:35:58 crc kubenswrapper[4688]: W0930 17:35:58.095158 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf216c00a_b59a_4679_87e9_2d137c20040f.slice/crio-d533d4f76584c69ca747f51c2e00b6392fa6915d1aab3b71df78923433c56216 WatchSource:0}: Error finding container d533d4f76584c69ca747f51c2e00b6392fa6915d1aab3b71df78923433c56216: Status 404 returned error can't find the container with id d533d4f76584c69ca747f51c2e00b6392fa6915d1aab3b71df78923433c56216 Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.128636 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-config-data\") pod \"nova-scheduler-0\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.128740 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.129299 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t42fs\" (UniqueName: \"kubernetes.io/projected/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-kube-api-access-t42fs\") pod \"nova-scheduler-0\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.231786 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.231952 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t42fs\" (UniqueName: \"kubernetes.io/projected/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-kube-api-access-t42fs\") pod \"nova-scheduler-0\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.232034 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-config-data\") pod \"nova-scheduler-0\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.239196 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
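The W-line above is cadvisor racing container creation: it sees a new cgroup before the runtime has registered the container, and the cgroup path itself encodes the QoS class, the pod UID (with dashes rewritten as underscores), and the CRI-O container ID. A small illustrative sketch, not kubelet/cadvisor code, that recovers those identifiers from such a path with the standard library:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Matches kubepods cgroup leaf paths like the one in the warning above.
var re = regexp.MustCompile(`kubepods-(besteffort|burstable|guaranteed)?-?pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)`)

func main() {
	path := "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf216c00a_b59a_4679_87e9_2d137c20040f.slice/crio-d533d4f76584c69ca747f51c2e00b6392fa6915d1aab3b71df78923433c56216"
	m := re.FindStringSubmatch(path)
	if m == nil {
		fmt.Println("no kubepods cgroup in path")
		return
	}
	uid := strings.ReplaceAll(m[2], "_", "-") // systemd slice names cannot contain '-'
	fmt.Printf("qos=%s podUID=%s containerID=%s\n", m[1], uid, m[3])
}

Run against the warning above it yields podUID f216c00a-b59a-4679-87e9-2d137c20040f and the d533d4f7... container ID, which is exactly the sandbox that shows up as ContainerStarted for nova-api-0 a second later, confirming the 404 was transient.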
\"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.240708 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-config-data\") pod \"nova-scheduler-0\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.259196 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t42fs\" (UniqueName: \"kubernetes.io/projected/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-kube-api-access-t42fs\") pod \"nova-scheduler-0\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") " pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.315911 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.883010 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.952878 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959","Type":"ContainerStarted","Data":"e72c313dfb4272b7eea7b0725290867bb7d8c7ff7fe8617da8e3cc4681ce1374"} Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.965866 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f216c00a-b59a-4679-87e9-2d137c20040f","Type":"ContainerStarted","Data":"421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa"} Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.965918 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f216c00a-b59a-4679-87e9-2d137c20040f","Type":"ContainerStarted","Data":"ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c"} Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.965930 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f216c00a-b59a-4679-87e9-2d137c20040f","Type":"ContainerStarted","Data":"d533d4f76584c69ca747f51c2e00b6392fa6915d1aab3b71df78923433c56216"} Sep 30 17:35:58 crc kubenswrapper[4688]: I0930 17:35:58.996187 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.996163267 podStartE2EDuration="1.996163267s" podCreationTimestamp="2025-09-30 17:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:35:58.989804993 +0000 UTC m=+1216.285242551" watchObservedRunningTime="2025-09-30 17:35:58.996163267 +0000 UTC m=+1216.291600825" Sep 30 17:35:59 crc kubenswrapper[4688]: I0930 17:35:59.257682 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:35:59 crc kubenswrapper[4688]: E0930 17:35:59.258139 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:35:59 crc kubenswrapper[4688]: E0930 
17:35:59.258161 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:35:59 crc kubenswrapper[4688]: E0930 17:35:59.258227 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:38:01.258206727 +0000 UTC m=+1338.553644285 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:35:59 crc kubenswrapper[4688]: I0930 17:35:59.447248 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54dc6487-bcec-4dc8-b513-84c6d95294bf" path="/var/lib/kubelet/pods/54dc6487-bcec-4dc8-b513-84c6d95294bf/volumes" Sep 30 17:35:59 crc kubenswrapper[4688]: I0930 17:35:59.977534 4688 generic.go:334] "Generic (PLEG): container finished" podID="3da0bc04-a53e-4880-9a5f-3c6152c8e360" containerID="03f4b6ab198039e44313643be5ca13e96f3aa82ac4ec593d39bf68104b2cebe4" exitCode=0 Sep 30 17:35:59 crc kubenswrapper[4688]: I0930 17:35:59.977692 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-p7bwc" event={"ID":"3da0bc04-a53e-4880-9a5f-3c6152c8e360","Type":"ContainerDied","Data":"03f4b6ab198039e44313643be5ca13e96f3aa82ac4ec593d39bf68104b2cebe4"} Sep 30 17:35:59 crc kubenswrapper[4688]: I0930 17:35:59.980551 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959","Type":"ContainerStarted","Data":"938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310"} Sep 30 17:36:00 crc kubenswrapper[4688]: I0930 17:36:00.016433 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.016412541 podStartE2EDuration="3.016412541s" podCreationTimestamp="2025-09-30 17:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:00.013509016 +0000 UTC m=+1217.308946574" watchObservedRunningTime="2025-09-30 17:36:00.016412541 +0000 UTC m=+1217.311850089" Sep 30 17:36:00 crc kubenswrapper[4688]: I0930 17:36:00.174878 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Sep 30 17:36:00 crc kubenswrapper[4688]: I0930 17:36:00.993610 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerStarted","Data":"73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84"} Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.044306 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.669514706 podStartE2EDuration="7.044282292s" podCreationTimestamp="2025-09-30 17:35:54 +0000 UTC" firstStartedPulling="2025-09-30 17:35:55.060160292 +0000 UTC m=+1212.355597850" lastFinishedPulling="2025-09-30 17:36:00.434927878 +0000 UTC m=+1217.730365436" observedRunningTime="2025-09-30 17:36:01.041277434 +0000 UTC m=+1218.336714992" watchObservedRunningTime="2025-09-30 17:36:01.044282292 +0000 UTC 
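The nestedpendingoperations entry above defers the etc-swift retry by exactly 2m2s. That figure is the ceiling of the capped exponential backoff kubelet applies to failing volume operations; the toy below reproduces the shape of that schedule. The constants (500ms initial delay, doubling factor, 2m2s cap) are assumptions modeled on kubelet's goroutinemap backoff, not a verbatim copy of it:

package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // assumed starting delay
	factor       = 2                      // assumed growth factor
	maxDelay     = 2*time.Minute + 2*time.Second // the 2m2s cap seen above
)

func main() {
	delay := initialDelay
	for attempt := 1; attempt <= 12; attempt++ {
		fmt.Printf("attempt %2d: retry in %v\n", attempt, delay)
		delay *= factor
		if delay > maxDelay {
			delay = maxDelay // once capped, every later failure reschedules 2m2s out
		}
	}
}

Under these assumptions the cap is reached after roughly eight consecutive failures, after which each failure logs a fresh "No retries permitted until <now+2m2s>" line like the one above until the swift-ring-files configmap finally exists.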
m=+1218.339719880" Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.560694 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.716308 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-config-data\") pod \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.716723 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-combined-ca-bundle\") pod \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.717063 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnnk5\" (UniqueName: \"kubernetes.io/projected/3da0bc04-a53e-4880-9a5f-3c6152c8e360-kube-api-access-gnnk5\") pod \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.717126 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-scripts\") pod \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\" (UID: \"3da0bc04-a53e-4880-9a5f-3c6152c8e360\") " Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.726379 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da0bc04-a53e-4880-9a5f-3c6152c8e360-kube-api-access-gnnk5" (OuterVolumeSpecName: "kube-api-access-gnnk5") pod "3da0bc04-a53e-4880-9a5f-3c6152c8e360" (UID: "3da0bc04-a53e-4880-9a5f-3c6152c8e360"). InnerVolumeSpecName "kube-api-access-gnnk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.728056 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-scripts" (OuterVolumeSpecName: "scripts") pod "3da0bc04-a53e-4880-9a5f-3c6152c8e360" (UID: "3da0bc04-a53e-4880-9a5f-3c6152c8e360"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.753801 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-config-data" (OuterVolumeSpecName: "config-data") pod "3da0bc04-a53e-4880-9a5f-3c6152c8e360" (UID: "3da0bc04-a53e-4880-9a5f-3c6152c8e360"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.776371 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da0bc04-a53e-4880-9a5f-3c6152c8e360" (UID: "3da0bc04-a53e-4880-9a5f-3c6152c8e360"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.821224 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.821256 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.821269 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnnk5\" (UniqueName: \"kubernetes.io/projected/3da0bc04-a53e-4880-9a5f-3c6152c8e360-kube-api-access-gnnk5\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:01 crc kubenswrapper[4688]: I0930 17:36:01.821295 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da0bc04-a53e-4880-9a5f-3c6152c8e360-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.008363 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-p7bwc" event={"ID":"3da0bc04-a53e-4880-9a5f-3c6152c8e360","Type":"ContainerDied","Data":"91ad78f91b1f2136da3437337baf7ccee756957f35881fc918f28e8147b637a1"} Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.008482 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91ad78f91b1f2136da3437337baf7ccee756957f35881fc918f28e8147b637a1" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.008410 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-p7bwc" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.008839 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.102471 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 30 17:36:02 crc kubenswrapper[4688]: E0930 17:36:02.103286 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da0bc04-a53e-4880-9a5f-3c6152c8e360" containerName="nova-cell1-conductor-db-sync" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.103375 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da0bc04-a53e-4880-9a5f-3c6152c8e360" containerName="nova-cell1-conductor-db-sync" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.103678 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da0bc04-a53e-4880-9a5f-3c6152c8e360" containerName="nova-cell1-conductor-db-sync" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.106104 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.110113 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.116155 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.230430 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/357c4a92-d57a-4c57-9cee-7a98de90edff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"357c4a92-d57a-4c57-9cee-7a98de90edff\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.230483 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hrx\" (UniqueName: \"kubernetes.io/projected/357c4a92-d57a-4c57-9cee-7a98de90edff-kube-api-access-w4hrx\") pod \"nova-cell1-conductor-0\" (UID: \"357c4a92-d57a-4c57-9cee-7a98de90edff\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.230658 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/357c4a92-d57a-4c57-9cee-7a98de90edff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"357c4a92-d57a-4c57-9cee-7a98de90edff\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.333844 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/357c4a92-d57a-4c57-9cee-7a98de90edff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"357c4a92-d57a-4c57-9cee-7a98de90edff\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.334334 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hrx\" (UniqueName: \"kubernetes.io/projected/357c4a92-d57a-4c57-9cee-7a98de90edff-kube-api-access-w4hrx\") pod \"nova-cell1-conductor-0\" (UID: \"357c4a92-d57a-4c57-9cee-7a98de90edff\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.334364 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/357c4a92-d57a-4c57-9cee-7a98de90edff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"357c4a92-d57a-4c57-9cee-7a98de90edff\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.339862 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/357c4a92-d57a-4c57-9cee-7a98de90edff-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"357c4a92-d57a-4c57-9cee-7a98de90edff\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.346088 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/357c4a92-d57a-4c57-9cee-7a98de90edff-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"357c4a92-d57a-4c57-9cee-7a98de90edff\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.372372 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hrx\" (UniqueName: \"kubernetes.io/projected/357c4a92-d57a-4c57-9cee-7a98de90edff-kube-api-access-w4hrx\") pod \"nova-cell1-conductor-0\" (UID: \"357c4a92-d57a-4c57-9cee-7a98de90edff\") " pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.426054 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:02 crc kubenswrapper[4688]: I0930 17:36:02.936882 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 30 17:36:03 crc kubenswrapper[4688]: I0930 17:36:03.022936 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"357c4a92-d57a-4c57-9cee-7a98de90edff","Type":"ContainerStarted","Data":"bde0deb06726f28f62e9ecdb0ca9239cebc29a81df8325be71e3bfad2fff372e"} Sep 30 17:36:03 crc kubenswrapper[4688]: I0930 17:36:03.317050 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 30 17:36:04 crc kubenswrapper[4688]: I0930 17:36:04.035338 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"357c4a92-d57a-4c57-9cee-7a98de90edff","Type":"ContainerStarted","Data":"5d76f90ea66802494b9e6bcf3e651c03ea756cce4bf40f72c69574275e615bc2"} Sep 30 17:36:04 crc kubenswrapper[4688]: I0930 17:36:04.035551 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:07 crc kubenswrapper[4688]: I0930 17:36:07.608673 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:36:07 crc kubenswrapper[4688]: I0930 17:36:07.610190 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:36:08 crc kubenswrapper[4688]: I0930 17:36:08.317821 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 30 17:36:08 crc kubenswrapper[4688]: I0930 17:36:08.361236 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 30 17:36:08 crc kubenswrapper[4688]: I0930 17:36:08.389554 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=6.389532285 podStartE2EDuration="6.389532285s" podCreationTimestamp="2025-09-30 17:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:04.067860532 +0000 UTC m=+1221.363298090" watchObservedRunningTime="2025-09-30 17:36:08.389532285 +0000 UTC m=+1225.684969843" Sep 30 17:36:08 crc kubenswrapper[4688]: I0930 17:36:08.690825 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 17:36:08 crc kubenswrapper[4688]: I0930 17:36:08.690851 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
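The two "Probe failed" entries above are HTTP startup probes against http://10.217.0.196:8774/ that timed out before the server returned headers; the quoted error string is exactly what net/http's client timeout produces. A minimal sketch that reproduces the probe semantics from a shell; the 1-second timeout here is illustrative, since the real timeout comes from the pod spec's probe definition:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues one GET and requires response headers within the timeout,
// mirroring an HTTP startup/readiness probe check.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		// On timeout this yields: context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://10.217.0.196:8774/", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
		return
	}
	fmt.Println("started")
}

Startup probe failures like these only delay readiness; the pod is not restarted unless the probe's failure threshold is exhausted, which matches nova-api-0 flipping to "started" and then "ready" a few seconds later in the log.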
headers)" Sep 30 17:36:09 crc kubenswrapper[4688]: I0930 17:36:09.122625 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 30 17:36:12 crc kubenswrapper[4688]: I0930 17:36:12.461507 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.104546 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.161192 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-combined-ca-bundle\") pod \"41a45cf5-c43e-4725-b761-ce66e8bd136a\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.161376 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a45cf5-c43e-4725-b761-ce66e8bd136a-logs\") pod \"41a45cf5-c43e-4725-b761-ce66e8bd136a\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.161562 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-config-data\") pod \"41a45cf5-c43e-4725-b761-ce66e8bd136a\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.161940 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7r7t\" (UniqueName: \"kubernetes.io/projected/41a45cf5-c43e-4725-b761-ce66e8bd136a-kube-api-access-q7r7t\") pod \"41a45cf5-c43e-4725-b761-ce66e8bd136a\" (UID: \"41a45cf5-c43e-4725-b761-ce66e8bd136a\") " Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.161952 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a45cf5-c43e-4725-b761-ce66e8bd136a-logs" (OuterVolumeSpecName: "logs") pod "41a45cf5-c43e-4725-b761-ce66e8bd136a" (UID: "41a45cf5-c43e-4725-b761-ce66e8bd136a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.162558 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41a45cf5-c43e-4725-b761-ce66e8bd136a-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.168545 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a45cf5-c43e-4725-b761-ce66e8bd136a-kube-api-access-q7r7t" (OuterVolumeSpecName: "kube-api-access-q7r7t") pod "41a45cf5-c43e-4725-b761-ce66e8bd136a" (UID: "41a45cf5-c43e-4725-b761-ce66e8bd136a"). InnerVolumeSpecName "kube-api-access-q7r7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.170866 4688 generic.go:334] "Generic (PLEG): container finished" podID="57dca142-01fe-4c2d-85ae-4a49bd945a8f" containerID="eb2ba3e454a1a4a8f27cf3fe2bfdaf9bc801a47af1cf48d5e230821b716ddf9e" exitCode=137 Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.171020 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57dca142-01fe-4c2d-85ae-4a49bd945a8f","Type":"ContainerDied","Data":"eb2ba3e454a1a4a8f27cf3fe2bfdaf9bc801a47af1cf48d5e230821b716ddf9e"} Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.174629 4688 generic.go:334] "Generic (PLEG): container finished" podID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerID="e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026" exitCode=137 Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.174671 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41a45cf5-c43e-4725-b761-ce66e8bd136a","Type":"ContainerDied","Data":"e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026"} Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.174709 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"41a45cf5-c43e-4725-b761-ce66e8bd136a","Type":"ContainerDied","Data":"2cb147cbc7b823a0b01e4807725b707c201ab165c14f851103f4c970e0b0be4d"} Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.174729 4688 scope.go:117] "RemoveContainer" containerID="e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.174901 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.194585 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-config-data" (OuterVolumeSpecName: "config-data") pod "41a45cf5-c43e-4725-b761-ce66e8bd136a" (UID: "41a45cf5-c43e-4725-b761-ce66e8bd136a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.197150 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41a45cf5-c43e-4725-b761-ce66e8bd136a" (UID: "41a45cf5-c43e-4725-b761-ce66e8bd136a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.260003 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.265104 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7r7t\" (UniqueName: \"kubernetes.io/projected/41a45cf5-c43e-4725-b761-ce66e8bd136a-kube-api-access-q7r7t\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.265155 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.265175 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a45cf5-c43e-4725-b761-ce66e8bd136a-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.270020 4688 scope.go:117] "RemoveContainer" containerID="a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.304245 4688 scope.go:117] "RemoveContainer" containerID="e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026" Sep 30 17:36:15 crc kubenswrapper[4688]: E0930 17:36:15.304889 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026\": container with ID starting with e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026 not found: ID does not exist" containerID="e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.304967 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026"} err="failed to get container status \"e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026\": rpc error: code = NotFound desc = could not find container \"e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026\": container with ID starting with e64958102fc76553536ad0ca116fe734d6e244d3d70d074e56c4530ac8060026 not found: ID does not exist" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.305008 4688 scope.go:117] "RemoveContainer" containerID="a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8" Sep 30 17:36:15 crc kubenswrapper[4688]: E0930 17:36:15.305425 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8\": container with ID starting with a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8 not found: ID does not exist" containerID="a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8" Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.305491 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8"} err="failed to get container status \"a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8\": rpc error: code = NotFound desc = could not find container \"a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8\": container with ID starting with a46aff5470d1e734825167ec485d9c9487a2326f39016577279b4caf826556c8 not found: ID does not exist" Sep 30 17:36:15 crc 
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.366680 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-config-data\") pod \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") "
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.366754 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk9sl\" (UniqueName: \"kubernetes.io/projected/57dca142-01fe-4c2d-85ae-4a49bd945a8f-kube-api-access-qk9sl\") pod \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") "
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.366901 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-combined-ca-bundle\") pod \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\" (UID: \"57dca142-01fe-4c2d-85ae-4a49bd945a8f\") "
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.377589 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57dca142-01fe-4c2d-85ae-4a49bd945a8f-kube-api-access-qk9sl" (OuterVolumeSpecName: "kube-api-access-qk9sl") pod "57dca142-01fe-4c2d-85ae-4a49bd945a8f" (UID: "57dca142-01fe-4c2d-85ae-4a49bd945a8f"). InnerVolumeSpecName "kube-api-access-qk9sl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.399667 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-config-data" (OuterVolumeSpecName: "config-data") pod "57dca142-01fe-4c2d-85ae-4a49bd945a8f" (UID: "57dca142-01fe-4c2d-85ae-4a49bd945a8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.419116 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57dca142-01fe-4c2d-85ae-4a49bd945a8f" (UID: "57dca142-01fe-4c2d-85ae-4a49bd945a8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.469957 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.470002 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk9sl\" (UniqueName: \"kubernetes.io/projected/57dca142-01fe-4c2d-85ae-4a49bd945a8f-kube-api-access-qk9sl\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.470016 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57dca142-01fe-4c2d-85ae-4a49bd945a8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.504055 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.520171 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.535376 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 17:36:15 crc kubenswrapper[4688]: E0930 17:36:15.538733 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerName="nova-metadata-log"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.538759 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerName="nova-metadata-log"
Sep 30 17:36:15 crc kubenswrapper[4688]: E0930 17:36:15.538782 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57dca142-01fe-4c2d-85ae-4a49bd945a8f" containerName="nova-cell1-novncproxy-novncproxy"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.538793 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="57dca142-01fe-4c2d-85ae-4a49bd945a8f" containerName="nova-cell1-novncproxy-novncproxy"
Sep 30 17:36:15 crc kubenswrapper[4688]: E0930 17:36:15.538818 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerName="nova-metadata-metadata"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.538826 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerName="nova-metadata-metadata"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.539060 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerName="nova-metadata-log"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.539073 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="57dca142-01fe-4c2d-85ae-4a49bd945a8f" containerName="nova-cell1-novncproxy-novncproxy"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.539099 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a45cf5-c43e-4725-b761-ce66e8bd136a" containerName="nova-metadata-metadata"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.540508 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.543378 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.543673 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.548585 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.674853 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-logs\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.675003 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.675139 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtcfz\" (UniqueName: \"kubernetes.io/projected/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-kube-api-access-qtcfz\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.675170 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-config-data\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.675204 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.776753 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtcfz\" (UniqueName: \"kubernetes.io/projected/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-kube-api-access-qtcfz\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.777196 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-config-data\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.777293 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.777424 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-logs\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.777543 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.778219 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-logs\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.783138 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.783897 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.783897 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-config-data\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.795224 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtcfz\" (UniqueName: \"kubernetes.io/projected/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-kube-api-access-qtcfz\") pod \"nova-metadata-0\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") " pod="openstack/nova-metadata-0"
Sep 30 17:36:15 crc kubenswrapper[4688]: I0930 17:36:15.908518 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.196031 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"57dca142-01fe-4c2d-85ae-4a49bd945a8f","Type":"ContainerDied","Data":"822aa2173b423e6577d3b3a245cdeefd0b6ce52ae3205de04951b22deef34899"}
Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.196103 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.196482 4688 scope.go:117] "RemoveContainer" containerID="eb2ba3e454a1a4a8f27cf3fe2bfdaf9bc801a47af1cf48d5e230821b716ddf9e" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.232704 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.251204 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.265275 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.266718 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.273491 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.273605 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.273912 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.278342 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.393729 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.393795 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.393897 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.393956 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.393997 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zrkk\" (UniqueName: \"kubernetes.io/projected/ba1f86f8-c725-4de3-89f9-4aed0b031859-kube-api-access-4zrkk\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.397096 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.497209 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.497297 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.498129 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zrkk\" (UniqueName: \"kubernetes.io/projected/ba1f86f8-c725-4de3-89f9-4aed0b031859-kube-api-access-4zrkk\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.498531 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.498593 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.504072 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.504289 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.505001 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.508230 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba1f86f8-c725-4de3-89f9-4aed0b031859-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.519735 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zrkk\" (UniqueName: \"kubernetes.io/projected/ba1f86f8-c725-4de3-89f9-4aed0b031859-kube-api-access-4zrkk\") pod \"nova-cell1-novncproxy-0\" (UID: \"ba1f86f8-c725-4de3-89f9-4aed0b031859\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:16 crc kubenswrapper[4688]: I0930 17:36:16.600033 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.103488 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 17:36:17 crc kubenswrapper[4688]: W0930 17:36:17.103676 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba1f86f8_c725_4de3_89f9_4aed0b031859.slice/crio-0624e0dbfc8ac2b1cac90bec2ba5453b813be103264dd2b448b38956db842252 WatchSource:0}: Error finding container 0624e0dbfc8ac2b1cac90bec2ba5453b813be103264dd2b448b38956db842252: Status 404 returned error can't find the container with id 0624e0dbfc8ac2b1cac90bec2ba5453b813be103264dd2b448b38956db842252 Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.209536 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ba1f86f8-c725-4de3-89f9-4aed0b031859","Type":"ContainerStarted","Data":"0624e0dbfc8ac2b1cac90bec2ba5453b813be103264dd2b448b38956db842252"} Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.213918 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f","Type":"ContainerStarted","Data":"feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5"} Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.213981 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f","Type":"ContainerStarted","Data":"0a291e89f83831e82d1ca4f032aef543b8470e69b4d3fdcf0504efda1df6d6ae"} Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.444922 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a45cf5-c43e-4725-b761-ce66e8bd136a" path="/var/lib/kubelet/pods/41a45cf5-c43e-4725-b761-ce66e8bd136a/volumes" Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.445653 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57dca142-01fe-4c2d-85ae-4a49bd945a8f" path="/var/lib/kubelet/pods/57dca142-01fe-4c2d-85ae-4a49bd945a8f/volumes" Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.614250 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.614889 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.614943 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 17:36:17 crc kubenswrapper[4688]: I0930 17:36:17.620801 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.227951 4688 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ba1f86f8-c725-4de3-89f9-4aed0b031859","Type":"ContainerStarted","Data":"22f873f971bd3aa5fd87063cf4bd06dbb87bd76fcdff20d55957a6205435dcbd"} Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.230075 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f","Type":"ContainerStarted","Data":"91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9"} Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.230557 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.241895 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.253559 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.253536743 podStartE2EDuration="2.253536743s" podCreationTimestamp="2025-09-30 17:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:18.248519513 +0000 UTC m=+1235.543957071" watchObservedRunningTime="2025-09-30 17:36:18.253536743 +0000 UTC m=+1235.548974301" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.272888 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.272859184 podStartE2EDuration="3.272859184s" podCreationTimestamp="2025-09-30 17:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:18.268943362 +0000 UTC m=+1235.564380940" watchObservedRunningTime="2025-09-30 17:36:18.272859184 +0000 UTC m=+1235.568296742" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.488962 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95bd95597-kqlzq"] Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.491260 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.506771 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95bd95597-kqlzq"] Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.550468 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-dns-svc\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.550552 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb6fr\" (UniqueName: \"kubernetes.io/projected/3ba20aa0-1673-4180-ae96-ad60a6d8732e-kube-api-access-jb6fr\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.550622 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-nb\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.550711 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-config\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.550813 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-sb\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.652357 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb6fr\" (UniqueName: \"kubernetes.io/projected/3ba20aa0-1673-4180-ae96-ad60a6d8732e-kube-api-access-jb6fr\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.652433 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-nb\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.652505 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-config\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.652593 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-sb\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.652636 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-dns-svc\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.653532 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-dns-svc\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.654228 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-nb\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.654342 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-sb\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.654549 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-config\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.704313 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb6fr\" (UniqueName: \"kubernetes.io/projected/3ba20aa0-1673-4180-ae96-ad60a6d8732e-kube-api-access-jb6fr\") pod \"dnsmasq-dns-95bd95597-kqlzq\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:18 crc kubenswrapper[4688]: I0930 17:36:18.819788 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:19 crc kubenswrapper[4688]: I0930 17:36:19.375371 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95bd95597-kqlzq"] Sep 30 17:36:20 crc kubenswrapper[4688]: I0930 17:36:20.271832 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" event={"ID":"3ba20aa0-1673-4180-ae96-ad60a6d8732e","Type":"ContainerStarted","Data":"df95dadf5399e5d26761a73f4a48c7ddc6fd8f77afb29cea268976a5af552b2d"} Sep 30 17:36:20 crc kubenswrapper[4688]: I0930 17:36:20.909704 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:36:20 crc kubenswrapper[4688]: I0930 17:36:20.909783 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:36:21 crc kubenswrapper[4688]: I0930 17:36:21.094769 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:21 crc kubenswrapper[4688]: I0930 17:36:21.281164 4688 generic.go:334] "Generic (PLEG): container finished" podID="3ba20aa0-1673-4180-ae96-ad60a6d8732e" containerID="84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76" exitCode=0 Sep 30 17:36:21 crc kubenswrapper[4688]: I0930 17:36:21.281237 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" event={"ID":"3ba20aa0-1673-4180-ae96-ad60a6d8732e","Type":"ContainerDied","Data":"84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76"} Sep 30 17:36:21 crc kubenswrapper[4688]: I0930 17:36:21.281392 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" containerName="nova-api-log" containerID="cri-o://ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c" gracePeriod=30 Sep 30 17:36:21 crc kubenswrapper[4688]: I0930 17:36:21.281560 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" containerName="nova-api-api" containerID="cri-o://421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa" gracePeriod=30 Sep 30 17:36:21 crc kubenswrapper[4688]: I0930 17:36:21.600736 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.044448 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.044800 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="ceilometer-central-agent" containerID="cri-o://c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d" gracePeriod=30 Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.044990 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="sg-core" containerID="cri-o://694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7" gracePeriod=30 Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.045001 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="ceilometer-notification-agent" 
containerID="cri-o://a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8" gracePeriod=30 Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.045342 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="proxy-httpd" containerID="cri-o://73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84" gracePeriod=30 Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.054408 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.195:3000/\": EOF" Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.297804 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" event={"ID":"3ba20aa0-1673-4180-ae96-ad60a6d8732e","Type":"ContainerStarted","Data":"53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138"} Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.297906 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.309734 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerDied","Data":"73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84"} Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.309669 4688 generic.go:334] "Generic (PLEG): container finished" podID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerID="73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84" exitCode=0 Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.309830 4688 generic.go:334] "Generic (PLEG): container finished" podID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerID="694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7" exitCode=2 Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.309894 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerDied","Data":"694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7"} Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.315364 4688 generic.go:334] "Generic (PLEG): container finished" podID="f216c00a-b59a-4679-87e9-2d137c20040f" containerID="ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c" exitCode=143 Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.315429 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f216c00a-b59a-4679-87e9-2d137c20040f","Type":"ContainerDied","Data":"ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c"} Sep 30 17:36:22 crc kubenswrapper[4688]: I0930 17:36:22.329417 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" podStartSLOduration=4.329394419 podStartE2EDuration="4.329394419s" podCreationTimestamp="2025-09-30 17:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:22.322029278 +0000 UTC m=+1239.617466836" watchObservedRunningTime="2025-09-30 17:36:22.329394419 +0000 UTC m=+1239.624831977" Sep 30 17:36:23 crc kubenswrapper[4688]: I0930 17:36:23.327745 4688 generic.go:334] "Generic (PLEG): 
container finished" podID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerID="c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d" exitCode=0 Sep 30 17:36:23 crc kubenswrapper[4688]: I0930 17:36:23.327943 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerDied","Data":"c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d"} Sep 30 17:36:24 crc kubenswrapper[4688]: I0930 17:36:24.590179 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.195:3000/\": dial tcp 10.217.0.195:3000: connect: connection refused" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.088769 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.093698 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.115948 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-combined-ca-bundle\") pod \"f216c00a-b59a-4679-87e9-2d137c20040f\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.117191 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-log-httpd\") pod \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.117240 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-ceilometer-tls-certs\") pod \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.117285 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-combined-ca-bundle\") pod \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.117365 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxt7x\" (UniqueName: \"kubernetes.io/projected/f216c00a-b59a-4679-87e9-2d137c20040f-kube-api-access-lxt7x\") pod \"f216c00a-b59a-4679-87e9-2d137c20040f\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.117422 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-config-data\") pod \"f216c00a-b59a-4679-87e9-2d137c20040f\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.117462 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-run-httpd\") pod 
\"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.117540 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l88x\" (UniqueName: \"kubernetes.io/projected/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-kube-api-access-6l88x\") pod \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.117614 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-config-data\") pod \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.119450 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f216c00a-b59a-4679-87e9-2d137c20040f-logs\") pod \"f216c00a-b59a-4679-87e9-2d137c20040f\" (UID: \"f216c00a-b59a-4679-87e9-2d137c20040f\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.120178 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-sg-core-conf-yaml\") pod \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.119824 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" (UID: "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.124371 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f216c00a-b59a-4679-87e9-2d137c20040f-logs" (OuterVolumeSpecName: "logs") pod "f216c00a-b59a-4679-87e9-2d137c20040f" (UID: "f216c00a-b59a-4679-87e9-2d137c20040f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.127255 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" (UID: "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.127845 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-kube-api-access-6l88x" (OuterVolumeSpecName: "kube-api-access-6l88x") pod "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" (UID: "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8"). InnerVolumeSpecName "kube-api-access-6l88x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.133959 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f216c00a-b59a-4679-87e9-2d137c20040f-kube-api-access-lxt7x" (OuterVolumeSpecName: "kube-api-access-lxt7x") pod "f216c00a-b59a-4679-87e9-2d137c20040f" (UID: "f216c00a-b59a-4679-87e9-2d137c20040f"). InnerVolumeSpecName "kube-api-access-lxt7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.146213 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-scripts\") pod \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\" (UID: \"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8\") " Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.147448 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.147539 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxt7x\" (UniqueName: \"kubernetes.io/projected/f216c00a-b59a-4679-87e9-2d137c20040f-kube-api-access-lxt7x\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.147639 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.147715 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l88x\" (UniqueName: \"kubernetes.io/projected/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-kube-api-access-6l88x\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.147811 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f216c00a-b59a-4679-87e9-2d137c20040f-logs\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.165340 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-scripts" (OuterVolumeSpecName: "scripts") pod "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" (UID: "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.166687 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" (UID: "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.175861 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f216c00a-b59a-4679-87e9-2d137c20040f" (UID: "f216c00a-b59a-4679-87e9-2d137c20040f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.197780 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-config-data" (OuterVolumeSpecName: "config-data") pod "f216c00a-b59a-4679-87e9-2d137c20040f" (UID: "f216c00a-b59a-4679-87e9-2d137c20040f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.229462 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" (UID: "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.249836 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.249878 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.249888 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.249898 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.249910 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f216c00a-b59a-4679-87e9-2d137c20040f-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.262083 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" (UID: "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.295005 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-config-data" (OuterVolumeSpecName: "config-data") pod "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" (UID: "0e3a0507-41e0-46e4-9ad4-fab5a3f471c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.319123 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-647757d8c5-6pzkp" podUID="62277faa-c7c7-4151-b35f-f3ebe78d3ffe" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.352596 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.353007 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.357889 4688 generic.go:334] "Generic (PLEG): container finished" podID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerID="a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8" exitCode=0 Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.358033 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerDied","Data":"a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8"} Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.358083 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0e3a0507-41e0-46e4-9ad4-fab5a3f471c8","Type":"ContainerDied","Data":"cb0a357a75c546800c3086d007a72a276b9e3f3709021dde0331efb043c8bcd7"} Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.358109 4688 scope.go:117] "RemoveContainer" containerID="73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.358191 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.361168 4688 generic.go:334] "Generic (PLEG): container finished" podID="f216c00a-b59a-4679-87e9-2d137c20040f" containerID="421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa" exitCode=0 Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.361271 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.361292 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.361392 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f216c00a-b59a-4679-87e9-2d137c20040f","Type":"ContainerDied","Data":"421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa"} Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.361476 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f216c00a-b59a-4679-87e9-2d137c20040f","Type":"ContainerDied","Data":"d533d4f76584c69ca747f51c2e00b6392fa6915d1aab3b71df78923433c56216"} Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.406474 4688 scope.go:117] "RemoveContainer" containerID="694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.423828 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.470362 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.470413 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.470427 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.473429 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.474002 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="sg-core" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474024 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="sg-core" Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.474076 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="proxy-httpd" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474084 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="proxy-httpd" Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.474101 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="ceilometer-notification-agent" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474107 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="ceilometer-notification-agent" Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.474125 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="ceilometer-central-agent" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474131 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="ceilometer-central-agent" Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.474149 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" containerName="nova-api-log" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474155 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" 
containerName="nova-api-log" Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.474169 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" containerName="nova-api-api" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474177 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" containerName="nova-api-api" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474360 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" containerName="nova-api-log" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474374 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="ceilometer-notification-agent" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474389 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="ceilometer-central-agent" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474400 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" containerName="nova-api-api" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474408 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="sg-core" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.474423 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" containerName="proxy-httpd" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.476403 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.478534 4688 scope.go:117] "RemoveContainer" containerID="a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.483114 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.483444 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.483469 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.486877 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.495965 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.500874 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.508206 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.512157 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.517228 4688 scope.go:117] "RemoveContainer" containerID="c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.517422 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.518011 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.560857 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z452s\" (UniqueName: \"kubernetes.io/projected/f5f96c12-889c-4c0a-a332-9d58c5fa6526-kube-api-access-z452s\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.561107 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5f96c12-889c-4c0a-a332-9d58c5fa6526-log-httpd\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.561156 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-config-data\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.561524 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.561673 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.561754 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-config-data\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.561786 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8r54\" (UniqueName: \"kubernetes.io/projected/96145437-c239-4ce7-9cc8-8315775ad266-kube-api-access-k8r54\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: 
I0930 17:36:25.561843 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.561904 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5f96c12-889c-4c0a-a332-9d58c5fa6526-run-httpd\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.562533 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.562617 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96145437-c239-4ce7-9cc8-8315775ad266-logs\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.562699 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-public-tls-certs\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.562741 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-scripts\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.562827 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-internal-tls-certs\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.603230 4688 scope.go:117] "RemoveContainer" containerID="73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84" Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.604748 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84\": container with ID starting with 73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84 not found: ID does not exist" containerID="73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.604834 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84"} err="failed to get container status \"73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84\": rpc error: code = NotFound 
desc = could not find container \"73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84\": container with ID starting with 73408d38c88269f431e2a65aedb6f70255b2d0734bdde1bd9afd9f446bbcec84 not found: ID does not exist"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.604879 4688 scope.go:117] "RemoveContainer" containerID="694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7"
Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.605528 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7\": container with ID starting with 694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7 not found: ID does not exist" containerID="694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.605594 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7"} err="failed to get container status \"694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7\": rpc error: code = NotFound desc = could not find container \"694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7\": container with ID starting with 694a21f58266e02c652f97b04502926d3168edd57e8bbd9bcc1c705b063e19a7 not found: ID does not exist"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.605629 4688 scope.go:117] "RemoveContainer" containerID="a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8"
Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.606490 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8\": container with ID starting with a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8 not found: ID does not exist" containerID="a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.606558 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8"} err="failed to get container status \"a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8\": rpc error: code = NotFound desc = could not find container \"a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8\": container with ID starting with a65ea36b432599defb72d5aba482fc02c9268fea0f52c46b7b1ae94153331ca8 not found: ID does not exist"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.606625 4688 scope.go:117] "RemoveContainer" containerID="c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d"
Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.607081 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d\": container with ID starting with c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d not found: ID does not exist" containerID="c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.607123 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d"} err="failed to get container status \"c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d\": rpc error: code = NotFound desc = could not find container \"c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d\": container with ID starting with c3d74e2f3b30b2a6ddd44847c3a29f4cdd17ea10e847a159a01c0bf505b4c22d not found: ID does not exist"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.607148 4688 scope.go:117] "RemoveContainer" containerID="421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.635397 4688 scope.go:117] "RemoveContainer" containerID="ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664493 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5f96c12-889c-4c0a-a332-9d58c5fa6526-log-httpd\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664553 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-config-data\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664599 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664627 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664667 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-config-data\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664687 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8r54\" (UniqueName: \"kubernetes.io/projected/96145437-c239-4ce7-9cc8-8315775ad266-kube-api-access-k8r54\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664713 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664737 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5f96c12-889c-4c0a-a332-9d58c5fa6526-run-httpd\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664800 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664819 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96145437-c239-4ce7-9cc8-8315775ad266-logs\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664850 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-public-tls-certs\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664871 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-scripts\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664891 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-internal-tls-certs\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.664913 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z452s\" (UniqueName: \"kubernetes.io/projected/f5f96c12-889c-4c0a-a332-9d58c5fa6526-kube-api-access-z452s\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.665110 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5f96c12-889c-4c0a-a332-9d58c5fa6526-log-httpd\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.665541 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5f96c12-889c-4c0a-a332-9d58c5fa6526-run-httpd\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.666186 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96145437-c239-4ce7-9cc8-8315775ad266-logs\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.671262 4688 scope.go:117] "RemoveContainer" containerID="421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.674401 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-config-data\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.676015 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa\": container with ID starting with 421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa not found: ID does not exist" containerID="421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.676096 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa"} err="failed to get container status \"421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa\": rpc error: code = NotFound desc = could not find container \"421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa\": container with ID starting with 421e11cbc3fa77e43afd7c3fa82a667d6a6921ca6bc109b1cc331da10664caaa not found: ID does not exist"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.676139 4688 scope.go:117] "RemoveContainer" containerID="ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c"
Sep 30 17:36:25 crc kubenswrapper[4688]: E0930 17:36:25.676710 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c\": container with ID starting with ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c not found: ID does not exist" containerID="ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.677052 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c"} err="failed to get container status \"ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c\": rpc error: code = NotFound desc = could not find container \"ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c\": container with ID starting with ec9f8348d5fdabedfad9e5594e817f51adec070bc5be7de6fe3130e720e3ad8c not found: ID does not exist"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.678733 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.678925 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.678977 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0"
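The NotFound bursts above are the kubelet retrying RemoveContainer/ContainerStatus against containers CRI-O has already deleted; they are benign because removal is idempotent. A minimal sketch (illustrative names, not kubelet source) of treating a gRPC NotFound as success during cleanup:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound treats a gRPC NotFound from the container runtime as
// success: if the container is already gone, the delete is a no-op.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	// Simulate the "code = NotFound ... ID does not exist" RPC errors
	// logged above during the RemoveContainer retries.
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(ignoreNotFound(err)) // <nil>: already-removed is fine
}
```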
\"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.679102 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-config-data\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.679536 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-internal-tls-certs\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.680437 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-public-tls-certs\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.689417 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-scripts\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.690287 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f96c12-889c-4c0a-a332-9d58c5fa6526-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.693216 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8r54\" (UniqueName: \"kubernetes.io/projected/96145437-c239-4ce7-9cc8-8315775ad266-kube-api-access-k8r54\") pod \"nova-api-0\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") " pod="openstack/nova-api-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.693465 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z452s\" (UniqueName: \"kubernetes.io/projected/f5f96c12-889c-4c0a-a332-9d58c5fa6526-kube-api-access-z452s\") pod \"ceilometer-0\" (UID: \"f5f96c12-889c-4c0a-a332-9d58c5fa6526\") " pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.890704 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.902083 4688 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.909054 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Sep 30 17:36:25 crc kubenswrapper[4688]: I0930 17:36:25.909473 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Sep 30 17:36:26 crc kubenswrapper[4688]: I0930 17:36:26.411999 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 17:36:26 crc kubenswrapper[4688]: W0930 17:36:26.419971 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5f96c12_889c_4c0a_a332_9d58c5fa6526.slice/crio-8f16388694f6eaa193c733d5bc223dea11812baafa2f603a5a81ba5e71a65945 WatchSource:0}: Error finding container 8f16388694f6eaa193c733d5bc223dea11812baafa2f603a5a81ba5e71a65945: Status 404 returned error can't find the container with id 8f16388694f6eaa193c733d5bc223dea11812baafa2f603a5a81ba5e71a65945
Sep 30 17:36:26 crc kubenswrapper[4688]: I0930 17:36:26.425333 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 30 17:36:26 crc kubenswrapper[4688]: I0930 17:36:26.506090 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Sep 30 17:36:26 crc kubenswrapper[4688]: I0930 17:36:26.600707 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Sep 30 17:36:26 crc kubenswrapper[4688]: I0930 17:36:26.622233 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Sep 30 17:36:26 crc kubenswrapper[4688]: I0930 17:36:26.922773 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 30 17:36:26 crc kubenswrapper[4688]: I0930 17:36:26.923331 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.393923 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96145437-c239-4ce7-9cc8-8315775ad266","Type":"ContainerStarted","Data":"b49e537dea620b31736e535f1e17f5fe94908b3176425818df0d7386959d5c56"}
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.393983 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96145437-c239-4ce7-9cc8-8315775ad266","Type":"ContainerStarted","Data":"c971d11a5e59ce9361e970b498650bbddbf435e3189eb87cc0ae1ed80a597e85"}
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.393997 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96145437-c239-4ce7-9cc8-8315775ad266","Type":"ContainerStarted","Data":"c3648679f2fd7608444bf70bda9fc9567d32e5c5a5fd304eb7b4879c58c42025"}
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.400423 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5f96c12-889c-4c0a-a332-9d58c5fa6526","Type":"ContainerStarted","Data":"84969bb8433378906aee5b1c115f37b670744e0d1cf18fafca2222d9c3774ee5"}
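The nova-metadata-0 startup-probe failures above carry Go's net/http client-timeout error: the kubelet's HTTPS GET to 10.217.0.199:8775 got no response headers within the probe's timeout. A standalone sketch that produces the same error shape (the 1s timeout is illustrative, and the exact wrapper text varies by Go version):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Roughly what an HTTPS probe does: a single GET with a hard deadline.
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			// Probes against self-signed service certs typically skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	_, err := client.Get("https://10.217.0.199:8775/")
	if err != nil {
		// Against a hung endpoint this prints the "Client.Timeout exceeded
		// while awaiting headers" text seen in the probe failures above.
		fmt.Println(err)
	}
}
```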
event={"ID":"f5f96c12-889c-4c0a-a332-9d58c5fa6526","Type":"ContainerStarted","Data":"84969bb8433378906aee5b1c115f37b670744e0d1cf18fafca2222d9c3774ee5"} Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.400487 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5f96c12-889c-4c0a-a332-9d58c5fa6526","Type":"ContainerStarted","Data":"8f16388694f6eaa193c733d5bc223dea11812baafa2f603a5a81ba5e71a65945"} Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.427230 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.427198937 podStartE2EDuration="2.427198937s" podCreationTimestamp="2025-09-30 17:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:27.416847669 +0000 UTC m=+1244.712285227" watchObservedRunningTime="2025-09-30 17:36:27.427198937 +0000 UTC m=+1244.722636495" Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.441537 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3a0507-41e0-46e4-9ad4-fab5a3f471c8" path="/var/lib/kubelet/pods/0e3a0507-41e0-46e4-9ad4-fab5a3f471c8/volumes" Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.442801 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f216c00a-b59a-4679-87e9-2d137c20040f" path="/var/lib/kubelet/pods/f216c00a-b59a-4679-87e9-2d137c20040f/volumes" Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.443560 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.598331 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-7pwh8"] Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.603292 4688 util.go:30] "No sandbox for pod can be found. 
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.606907 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.607019 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.609522 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7pwh8"]
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.727853 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-config-data\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.728023 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95qvb\" (UniqueName: \"kubernetes.io/projected/17944330-bf7d-4ae7-bb30-588fc5c65b3f-kube-api-access-95qvb\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.728067 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-scripts\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.728133 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.830359 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95qvb\" (UniqueName: \"kubernetes.io/projected/17944330-bf7d-4ae7-bb30-588fc5c65b3f-kube-api-access-95qvb\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.830842 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-scripts\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.830913 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.830997 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-config-data\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.836532 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.838114 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-config-data\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.851532 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-scripts\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.859895 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95qvb\" (UniqueName: \"kubernetes.io/projected/17944330-bf7d-4ae7-bb30-588fc5c65b3f-kube-api-access-95qvb\") pod \"nova-cell1-cell-mapping-7pwh8\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") " pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:27 crc kubenswrapper[4688]: I0930 17:36:27.932223 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:28 crc kubenswrapper[4688]: I0930 17:36:28.440208 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7pwh8"]
Sep 30 17:36:28 crc kubenswrapper[4688]: I0930 17:36:28.821856 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95bd95597-kqlzq"
Sep 30 17:36:28 crc kubenswrapper[4688]: I0930 17:36:28.908023 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-54c4g"]
Sep 30 17:36:28 crc kubenswrapper[4688]: I0930 17:36:28.908798 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" podUID="fed8bea4-757d-495a-8740-e46ad74b59e5" containerName="dnsmasq-dns" containerID="cri-o://ae94cef2c2f4f29c0e995b19088286109f415433a9afb44800dacbe69f7210d5" gracePeriod=10
Sep 30 17:36:29 crc kubenswrapper[4688]: I0930 17:36:29.222631 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" podUID="fed8bea4-757d-495a-8740-e46ad74b59e5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.191:5353: connect: connection refused"
Sep 30 17:36:29 crc kubenswrapper[4688]: I0930 17:36:29.448609 4688 generic.go:334] "Generic (PLEG): container finished" podID="fed8bea4-757d-495a-8740-e46ad74b59e5" containerID="ae94cef2c2f4f29c0e995b19088286109f415433a9afb44800dacbe69f7210d5" exitCode=0
Sep 30 17:36:29 crc kubenswrapper[4688]: I0930 17:36:29.466002 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" event={"ID":"fed8bea4-757d-495a-8740-e46ad74b59e5","Type":"ContainerDied","Data":"ae94cef2c2f4f29c0e995b19088286109f415433a9afb44800dacbe69f7210d5"}
Sep 30 17:36:29 crc kubenswrapper[4688]: I0930 17:36:29.466073 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5f96c12-889c-4c0a-a332-9d58c5fa6526","Type":"ContainerStarted","Data":"928fff8fbc61d47a6bfee42e68af16024038742231360334fe21c9e34bd21bb6"}
Sep 30 17:36:29 crc kubenswrapper[4688]: I0930 17:36:29.466096 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7pwh8" event={"ID":"17944330-bf7d-4ae7-bb30-588fc5c65b3f","Type":"ContainerStarted","Data":"54bd07146cb7cc20ae2c4b34620108e0dd35838257e7dc889bb050b6dbb12d03"}
Sep 30 17:36:29 crc kubenswrapper[4688]: I0930 17:36:29.466112 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7pwh8" event={"ID":"17944330-bf7d-4ae7-bb30-588fc5c65b3f","Type":"ContainerStarted","Data":"faea124a69100d1a82c501f8cd09bcb1fd62afc9bd9104d7182f5cfe263844ce"}
Sep 30 17:36:29 crc kubenswrapper[4688]: I0930 17:36:29.486858 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-7pwh8" podStartSLOduration=2.486836257 podStartE2EDuration="2.486836257s" podCreationTimestamp="2025-09-30 17:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:29.48542596 +0000 UTC m=+1246.780863528" watchObservedRunningTime="2025-09-30 17:36:29.486836257 +0000 UTC m=+1246.782273815"
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.046971 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d99cc479-54c4g"
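"Killing container with a grace period" with gracePeriod=10 is the standard two-step stop: signal the container to terminate, wait up to the grace period, then force-kill. Here dnsmasq-dns exited cleanly within the window (exitCode=0 on ContainerDied). A sketch of the same pattern applied to a local process (not the kubelet's actual code path):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace, then escalates to SIGKILL,
// mirroring the "Killing container with a grace period" flow logged above.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period, like dnsmasq-dns above
	case <-time.After(grace):
		return cmd.Process.Kill() // grace exhausted: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithGrace(cmd, 10*time.Second)) // 10s, as in the log
}
```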
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.192394 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-sb\") pod \"fed8bea4-757d-495a-8740-e46ad74b59e5\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") "
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.192458 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-dns-svc\") pod \"fed8bea4-757d-495a-8740-e46ad74b59e5\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") "
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.192776 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-config\") pod \"fed8bea4-757d-495a-8740-e46ad74b59e5\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") "
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.192820 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf298\" (UniqueName: \"kubernetes.io/projected/fed8bea4-757d-495a-8740-e46ad74b59e5-kube-api-access-zf298\") pod \"fed8bea4-757d-495a-8740-e46ad74b59e5\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") "
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.192848 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-nb\") pod \"fed8bea4-757d-495a-8740-e46ad74b59e5\" (UID: \"fed8bea4-757d-495a-8740-e46ad74b59e5\") "
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.200746 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed8bea4-757d-495a-8740-e46ad74b59e5-kube-api-access-zf298" (OuterVolumeSpecName: "kube-api-access-zf298") pod "fed8bea4-757d-495a-8740-e46ad74b59e5" (UID: "fed8bea4-757d-495a-8740-e46ad74b59e5"). InnerVolumeSpecName "kube-api-access-zf298". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.253532 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fed8bea4-757d-495a-8740-e46ad74b59e5" (UID: "fed8bea4-757d-495a-8740-e46ad74b59e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.255321 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-config" (OuterVolumeSpecName: "config") pod "fed8bea4-757d-495a-8740-e46ad74b59e5" (UID: "fed8bea4-757d-495a-8740-e46ad74b59e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.271461 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fed8bea4-757d-495a-8740-e46ad74b59e5" (UID: "fed8bea4-757d-495a-8740-e46ad74b59e5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.281942 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fed8bea4-757d-495a-8740-e46ad74b59e5" (UID: "fed8bea4-757d-495a-8740-e46ad74b59e5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.296200 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.296248 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.296306 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf298\" (UniqueName: \"kubernetes.io/projected/fed8bea4-757d-495a-8740-e46ad74b59e5-kube-api-access-zf298\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.296317 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.296327 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed8bea4-757d-495a-8740-e46ad74b59e5-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.398748 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp"
Sep 30 17:36:30 crc kubenswrapper[4688]: E0930 17:36:30.398987 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 17:36:30 crc kubenswrapper[4688]: E0930 17:36:30.399047 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found
Sep 30 17:36:30 crc kubenswrapper[4688]: E0930 17:36:30.399122 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:38:32.399094597 +0000 UTC m=+1369.694532165 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.490280 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d99cc479-54c4g"
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.490709 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d99cc479-54c4g" event={"ID":"fed8bea4-757d-495a-8740-e46ad74b59e5","Type":"ContainerDied","Data":"41a0688024ccfa5e4f65a6a35133909b4a6fe3187f65e14b7912eda5bbb5adc0"}
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.490802 4688 scope.go:117] "RemoveContainer" containerID="ae94cef2c2f4f29c0e995b19088286109f415433a9afb44800dacbe69f7210d5"
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.566811 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-54c4g"]
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.575539 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56d99cc479-54c4g"]
Sep 30 17:36:30 crc kubenswrapper[4688]: I0930 17:36:30.617333 4688 scope.go:117] "RemoveContainer" containerID="10bcc35e7974b9ae6756a76393c538d2f56e94eed87403b160236cedc50d7c7c"
Sep 30 17:36:31 crc kubenswrapper[4688]: I0930 17:36:31.456537 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fed8bea4-757d-495a-8740-e46ad74b59e5" path="/var/lib/kubelet/pods/fed8bea4-757d-495a-8740-e46ad74b59e5/volumes"
Sep 30 17:36:31 crc kubenswrapper[4688]: I0930 17:36:31.504075 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5f96c12-889c-4c0a-a332-9d58c5fa6526","Type":"ContainerStarted","Data":"cccff30c9b10316070a8420577a75a199a9ade76cf3e217d1df08630fc9fba97"}
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.547122 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5f96c12-889c-4c0a-a332-9d58c5fa6526","Type":"ContainerStarted","Data":"5d8abc6a42d74acedd80dd6a53e1fd42389980595e23e05fd2f3aafb2020638e"}
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.547808 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.549708 4688 generic.go:334] "Generic (PLEG): container finished" podID="17944330-bf7d-4ae7-bb30-588fc5c65b3f" containerID="54bd07146cb7cc20ae2c4b34620108e0dd35838257e7dc889bb050b6dbb12d03" exitCode=0
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.549762 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7pwh8" event={"ID":"17944330-bf7d-4ae7-bb30-588fc5c65b3f","Type":"ContainerDied","Data":"54bd07146cb7cc20ae2c4b34620108e0dd35838257e7dc889bb050b6dbb12d03"}
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.589221 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.705072173 podStartE2EDuration="10.589199554s" podCreationTimestamp="2025-09-30 17:36:25 +0000 UTC" firstStartedPulling="2025-09-30 17:36:26.42513554 +0000 UTC m=+1243.720573098" lastFinishedPulling="2025-09-30 17:36:34.309262921 +0000 UTC m=+1251.604700479" observedRunningTime="2025-09-30 17:36:35.567970835 +0000 UTC m=+1252.863408413" watchObservedRunningTime="2025-09-30 17:36:35.589199554 +0000 UTC m=+1252.884637112"
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.902893 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.902953 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.915591 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.924583 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Sep 30 17:36:35 crc kubenswrapper[4688]: I0930 17:36:35.928299 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Sep 30 17:36:36 crc kubenswrapper[4688]: I0930 17:36:36.569154 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Sep 30 17:36:36 crc kubenswrapper[4688]: I0930 17:36:36.920990 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 30 17:36:36 crc kubenswrapper[4688]: I0930 17:36:36.920990 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.026500 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.167823 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95qvb\" (UniqueName: \"kubernetes.io/projected/17944330-bf7d-4ae7-bb30-588fc5c65b3f-kube-api-access-95qvb\") pod \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") "
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.168120 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-config-data\") pod \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") "
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.168242 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-scripts\") pod \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") "
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.168364 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-combined-ca-bundle\") pod \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\" (UID: \"17944330-bf7d-4ae7-bb30-588fc5c65b3f\") "
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.188473 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-scripts" (OuterVolumeSpecName: "scripts") pod "17944330-bf7d-4ae7-bb30-588fc5c65b3f" (UID: "17944330-bf7d-4ae7-bb30-588fc5c65b3f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.188694 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17944330-bf7d-4ae7-bb30-588fc5c65b3f-kube-api-access-95qvb" (OuterVolumeSpecName: "kube-api-access-95qvb") pod "17944330-bf7d-4ae7-bb30-588fc5c65b3f" (UID: "17944330-bf7d-4ae7-bb30-588fc5c65b3f"). InnerVolumeSpecName "kube-api-access-95qvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.218013 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-config-data" (OuterVolumeSpecName: "config-data") pod "17944330-bf7d-4ae7-bb30-588fc5c65b3f" (UID: "17944330-bf7d-4ae7-bb30-588fc5c65b3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.233068 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17944330-bf7d-4ae7-bb30-588fc5c65b3f" (UID: "17944330-bf7d-4ae7-bb30-588fc5c65b3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.271389 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.271695 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.271707 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17944330-bf7d-4ae7-bb30-588fc5c65b3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.271721 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95qvb\" (UniqueName: \"kubernetes.io/projected/17944330-bf7d-4ae7-bb30-588fc5c65b3f-kube-api-access-95qvb\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.577250 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7pwh8"
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.577282 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7pwh8" event={"ID":"17944330-bf7d-4ae7-bb30-588fc5c65b3f","Type":"ContainerDied","Data":"faea124a69100d1a82c501f8cd09bcb1fd62afc9bd9104d7182f5cfe263844ce"}
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.577361 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faea124a69100d1a82c501f8cd09bcb1fd62afc9bd9104d7182f5cfe263844ce"
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.784555 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.784887 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-log" containerID="cri-o://c971d11a5e59ce9361e970b498650bbddbf435e3189eb87cc0ae1ed80a597e85" gracePeriod=30
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.784970 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-api" containerID="cri-o://b49e537dea620b31736e535f1e17f5fe94908b3176425818df0d7386959d5c56" gracePeriod=30
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.805654 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.805947 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" containerName="nova-scheduler-scheduler" containerID="cri-o://938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" gracePeriod=30
Sep 30 17:36:37 crc kubenswrapper[4688]: I0930 17:36:37.846024 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 17:36:38 crc kubenswrapper[4688]: E0930 17:36:38.321139 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Sep 30 17:36:38 crc kubenswrapper[4688]: E0930 17:36:38.323727 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Sep 30 17:36:38 crc kubenswrapper[4688]: E0930 17:36:38.325302 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Sep 30 17:36:38 crc kubenswrapper[4688]: E0930 17:36:38.325342 4688 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" containerName="nova-scheduler-scheduler"
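The nova-scheduler readiness probe is an exec probe, cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]: pgrep -r DRST matches a nova-scheduler process only if it is in run state D, R, S, or T. It errors above because the container is already stopping, so no exec PID can be registered. A sketch of running the same check and mapping pgrep's exit codes (0 = match found, 1 = no match) to a ready/unready result:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the nova-scheduler readiness probe execs in the log:
	// match a "nova-scheduler" process in run state D, R, S, or T.
	cmd := exec.Command("/usr/bin/pgrep", "-r", "DRST", "nova-scheduler")
	err := cmd.Run()

	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("ready: scheduler process found") // pgrep exit 0
	case errors.As(err, &ee) && ee.ExitCode() == 1:
		fmt.Println("not ready: no matching process") // pgrep exit 1
	default:
		// Analogous to the probe errors above: the command could not
		// be executed at all (here, the container was stopping).
		fmt.Println("probe error:", err)
	}
}
```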
pod="openstack/nova-scheduler-0" podUID="8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" containerName="nova-scheduler-scheduler" Sep 30 17:36:38 crc kubenswrapper[4688]: I0930 17:36:38.594865 4688 generic.go:334] "Generic (PLEG): container finished" podID="96145437-c239-4ce7-9cc8-8315775ad266" containerID="c971d11a5e59ce9361e970b498650bbddbf435e3189eb87cc0ae1ed80a597e85" exitCode=143 Sep 30 17:36:38 crc kubenswrapper[4688]: I0930 17:36:38.595791 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96145437-c239-4ce7-9cc8-8315775ad266","Type":"ContainerDied","Data":"c971d11a5e59ce9361e970b498650bbddbf435e3189eb87cc0ae1ed80a597e85"} Sep 30 17:36:39 crc kubenswrapper[4688]: I0930 17:36:39.603733 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-log" containerID="cri-o://feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5" gracePeriod=30 Sep 30 17:36:39 crc kubenswrapper[4688]: I0930 17:36:39.603787 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-metadata" containerID="cri-o://91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9" gracePeriod=30 Sep 30 17:36:40 crc kubenswrapper[4688]: I0930 17:36:40.630801 4688 generic.go:334] "Generic (PLEG): container finished" podID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerID="feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5" exitCode=143 Sep 30 17:36:40 crc kubenswrapper[4688]: I0930 17:36:40.630853 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f","Type":"ContainerDied","Data":"feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5"} Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.655673 4688 generic.go:334] "Generic (PLEG): container finished" podID="96145437-c239-4ce7-9cc8-8315775ad266" containerID="b49e537dea620b31736e535f1e17f5fe94908b3176425818df0d7386959d5c56" exitCode=0 Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.656024 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96145437-c239-4ce7-9cc8-8315775ad266","Type":"ContainerDied","Data":"b49e537dea620b31736e535f1e17f5fe94908b3176425818df0d7386959d5c56"} Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.656339 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96145437-c239-4ce7-9cc8-8315775ad266","Type":"ContainerDied","Data":"c3648679f2fd7608444bf70bda9fc9567d32e5c5a5fd304eb7b4879c58c42025"} Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.656363 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3648679f2fd7608444bf70bda9fc9567d32e5c5a5fd304eb7b4879c58c42025" Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.728483 4688 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.755817 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:60988->10.217.0.199:8775: read: connection reset by peer"
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.756219 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:32768->10.217.0.199:8775: read: connection reset by peer"
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.799756 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-config-data\") pod \"96145437-c239-4ce7-9cc8-8315775ad266\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") "
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.799836 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-public-tls-certs\") pod \"96145437-c239-4ce7-9cc8-8315775ad266\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") "
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.799996 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-internal-tls-certs\") pod \"96145437-c239-4ce7-9cc8-8315775ad266\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") "
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.800065 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-combined-ca-bundle\") pod \"96145437-c239-4ce7-9cc8-8315775ad266\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") "
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.800099 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96145437-c239-4ce7-9cc8-8315775ad266-logs\") pod \"96145437-c239-4ce7-9cc8-8315775ad266\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") "
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.800222 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8r54\" (UniqueName: \"kubernetes.io/projected/96145437-c239-4ce7-9cc8-8315775ad266-kube-api-access-k8r54\") pod \"96145437-c239-4ce7-9cc8-8315775ad266\" (UID: \"96145437-c239-4ce7-9cc8-8315775ad266\") "
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.801214 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96145437-c239-4ce7-9cc8-8315775ad266-logs" (OuterVolumeSpecName: "logs") pod "96145437-c239-4ce7-9cc8-8315775ad266" (UID: "96145437-c239-4ce7-9cc8-8315775ad266"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.806641 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96145437-c239-4ce7-9cc8-8315775ad266-kube-api-access-k8r54" (OuterVolumeSpecName: "kube-api-access-k8r54") pod "96145437-c239-4ce7-9cc8-8315775ad266" (UID: "96145437-c239-4ce7-9cc8-8315775ad266"). InnerVolumeSpecName "kube-api-access-k8r54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.833235 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96145437-c239-4ce7-9cc8-8315775ad266" (UID: "96145437-c239-4ce7-9cc8-8315775ad266"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.837981 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-config-data" (OuterVolumeSpecName: "config-data") pod "96145437-c239-4ce7-9cc8-8315775ad266" (UID: "96145437-c239-4ce7-9cc8-8315775ad266"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.858784 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "96145437-c239-4ce7-9cc8-8315775ad266" (UID: "96145437-c239-4ce7-9cc8-8315775ad266"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.871680 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "96145437-c239-4ce7-9cc8-8315775ad266" (UID: "96145437-c239-4ce7-9cc8-8315775ad266"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.902885 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.902923 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96145437-c239-4ce7-9cc8-8315775ad266-logs\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.902933 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8r54\" (UniqueName: \"kubernetes.io/projected/96145437-c239-4ce7-9cc8-8315775ad266-kube-api-access-k8r54\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.902944 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.902954 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-public-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:42 crc kubenswrapper[4688]: I0930 17:36:42.902964 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96145437-c239-4ce7-9cc8-8315775ad266-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.170264 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.311942 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-logs\") pod \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") "
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.312299 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtcfz\" (UniqueName: \"kubernetes.io/projected/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-kube-api-access-qtcfz\") pod \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") "
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.312345 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-config-data\") pod \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") "
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.312362 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-combined-ca-bundle\") pod \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") "
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.312424 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-nova-metadata-tls-certs\") pod \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\" (UID: \"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f\") "
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.313345 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-logs" (OuterVolumeSpecName: "logs") pod "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" (UID: "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.318211 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-kube-api-access-qtcfz" (OuterVolumeSpecName: "kube-api-access-qtcfz") pod "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" (UID: "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f"). InnerVolumeSpecName "kube-api-access-qtcfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.318383 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310 is running failed: container process not found" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.318773 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310 is running failed: container process not found" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.319839 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310 is running failed: container process not found" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.319931 4688 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" containerName="nova-scheduler-scheduler"
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.342326 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" (UID: "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.345843 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-config-data" (OuterVolumeSpecName: "config-data") pod "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" (UID: "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.392980 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" (UID: "5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.414695 4688 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.414728 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-logs\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.414737 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtcfz\" (UniqueName: \"kubernetes.io/projected/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-kube-api-access-qtcfz\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.414747 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.414757 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.467113 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.517846 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t42fs\" (UniqueName: \"kubernetes.io/projected/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-kube-api-access-t42fs\") pod \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") "
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.518376 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-config-data\") pod \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") "
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.518466 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-combined-ca-bundle\") pod \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\" (UID: \"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959\") "
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.532970 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-kube-api-access-t42fs" (OuterVolumeSpecName: "kube-api-access-t42fs") pod "8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" (UID: "8dd64c0e-42f4-4fcd-94bc-0e067c4b7959"). InnerVolumeSpecName "kube-api-access-t42fs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.550794 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-config-data" (OuterVolumeSpecName: "config-data") pod "8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" (UID: "8dd64c0e-42f4-4fcd-94bc-0e067c4b7959"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.551506 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" (UID: "8dd64c0e-42f4-4fcd-94bc-0e067c4b7959"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.621541 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t42fs\" (UniqueName: \"kubernetes.io/projected/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-kube-api-access-t42fs\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.621652 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.621662 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.682048 4688 generic.go:334] "Generic (PLEG): container finished" podID="8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" exitCode=0 Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.682190 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.682138 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959","Type":"ContainerDied","Data":"938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310"} Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.682349 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8dd64c0e-42f4-4fcd-94bc-0e067c4b7959","Type":"ContainerDied","Data":"e72c313dfb4272b7eea7b0725290867bb7d8c7ff7fe8617da8e3cc4681ce1374"} Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.682403 4688 scope.go:117] "RemoveContainer" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.690804 4688 generic.go:334] "Generic (PLEG): container finished" podID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerID="91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9" exitCode=0 Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.690905 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.691003 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.691080 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f","Type":"ContainerDied","Data":"91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9"} Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.691160 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f","Type":"ContainerDied","Data":"0a291e89f83831e82d1ca4f032aef543b8470e69b4d3fdcf0504efda1df6d6ae"} Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.721318 4688 scope.go:117] "RemoveContainer" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.723055 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310\": container with ID starting with 938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310 not found: ID does not exist" containerID="938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.723095 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310"} err="failed to get container status \"938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310\": rpc error: code = NotFound desc = could not find container \"938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310\": container with ID starting with 938ca5a8b5f2307dfc6d47131814d3b9587fffa0e6ba163192483c62cc0e9310 not found: ID does not exist" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.723119 4688 scope.go:117] "RemoveContainer" containerID="91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.743697 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.761721 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.762609 4688 scope.go:117] "RemoveContainer" containerID="feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.782678 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.803157 4688 scope.go:117] "RemoveContainer" containerID="91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.816476 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9\": container with ID starting with 91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9 not found: ID does not exist" containerID="91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.816562 4688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9"} err="failed to get container status \"91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9\": rpc error: code = NotFound desc = could not find container \"91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9\": container with ID starting with 91101023793e6d4136ca8bc548d6d8728c801e7768e10ba896e20f63e754e4d9 not found: ID does not exist" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.816721 4688 scope.go:117] "RemoveContainer" containerID="feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.817325 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5\": container with ID starting with feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5 not found: ID does not exist" containerID="feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.817356 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5"} err="failed to get container status \"feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5\": rpc error: code = NotFound desc = could not find container \"feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5\": container with ID starting with feb145062b68e71d767e338215844904530402eb46e6ac6adc29334102816ec5 not found: ID does not exist" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.819757 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.820885 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-api" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.820917 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-api" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.820958 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-log" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.820968 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-log" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.820977 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-metadata" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.820985 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-metadata" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.821014 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed8bea4-757d-495a-8740-e46ad74b59e5" containerName="dnsmasq-dns" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821022 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed8bea4-757d-495a-8740-e46ad74b59e5" containerName="dnsmasq-dns" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.821040 4688 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="17944330-bf7d-4ae7-bb30-588fc5c65b3f" containerName="nova-manage" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821050 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="17944330-bf7d-4ae7-bb30-588fc5c65b3f" containerName="nova-manage" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.821066 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-log" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821074 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-log" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.821087 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fed8bea4-757d-495a-8740-e46ad74b59e5" containerName="init" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821095 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fed8bea4-757d-495a-8740-e46ad74b59e5" containerName="init" Sep 30 17:36:43 crc kubenswrapper[4688]: E0930 17:36:43.821120 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" containerName="nova-scheduler-scheduler" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821128 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" containerName="nova-scheduler-scheduler" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821699 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-api" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821730 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-log" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821757 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" containerName="nova-metadata-metadata" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821779 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fed8bea4-757d-495a-8740-e46ad74b59e5" containerName="dnsmasq-dns" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821806 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="96145437-c239-4ce7-9cc8-8315775ad266" containerName="nova-api-log" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821830 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="17944330-bf7d-4ae7-bb30-588fc5c65b3f" containerName="nova-manage" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.821848 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" containerName="nova-scheduler-scheduler" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.824111 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.828177 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.828475 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.846829 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.865591 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.883829 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.894236 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.901744 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.904345 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.908618 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.909060 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.909330 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.912389 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.913949 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.916356 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.926428 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.940733 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.940803 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss77s\" (UniqueName: \"kubernetes.io/projected/61afa819-98c6-408a-b62b-9098624f6e38-kube-api-access-ss77s\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.940965 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d423914e-2c66-43fb-984f-8936b0c1e5af-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.941026 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d423914e-2c66-43fb-984f-8936b0c1e5af-config-data\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.941079 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61afa819-98c6-408a-b62b-9098624f6e38-logs\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.941203 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.941767 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d423914e-2c66-43fb-984f-8936b0c1e5af-logs\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.941852 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-config-data\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.941882 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-public-tls-certs\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.941950 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d423914e-2c66-43fb-984f-8936b0c1e5af-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.942034 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4v78\" (UniqueName: \"kubernetes.io/projected/d423914e-2c66-43fb-984f-8936b0c1e5af-kube-api-access-x4v78\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:43 crc kubenswrapper[4688]: I0930 17:36:43.942853 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.043698 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kdp\" (UniqueName: \"kubernetes.io/projected/4967d3fb-572e-4d96-85f2-7f3ef31acc3b-kube-api-access-62kdp\") pod \"nova-scheduler-0\" (UID: \"4967d3fb-572e-4d96-85f2-7f3ef31acc3b\") " pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.043771 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d423914e-2c66-43fb-984f-8936b0c1e5af-logs\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.043843 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-config-data\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.043870 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-public-tls-certs\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.043913 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d423914e-2c66-43fb-984f-8936b0c1e5af-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.043956 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4v78\" (UniqueName: \"kubernetes.io/projected/d423914e-2c66-43fb-984f-8936b0c1e5af-kube-api-access-x4v78\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.043976 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4967d3fb-572e-4d96-85f2-7f3ef31acc3b-config-data\") pod \"nova-scheduler-0\" (UID: \"4967d3fb-572e-4d96-85f2-7f3ef31acc3b\") " pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.044004 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.044037 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss77s\" (UniqueName: \"kubernetes.io/projected/61afa819-98c6-408a-b62b-9098624f6e38-kube-api-access-ss77s\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.044053 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4967d3fb-572e-4d96-85f2-7f3ef31acc3b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4967d3fb-572e-4d96-85f2-7f3ef31acc3b\") " pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.044088 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d423914e-2c66-43fb-984f-8936b0c1e5af-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.044106 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d423914e-2c66-43fb-984f-8936b0c1e5af-config-data\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.044129 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61afa819-98c6-408a-b62b-9098624f6e38-logs\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.044167 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.044483 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d423914e-2c66-43fb-984f-8936b0c1e5af-logs\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.045250 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61afa819-98c6-408a-b62b-9098624f6e38-logs\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.048263 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.048278 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d423914e-2c66-43fb-984f-8936b0c1e5af-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.048793 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-config-data\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.049836 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.050019 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d423914e-2c66-43fb-984f-8936b0c1e5af-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.050329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61afa819-98c6-408a-b62b-9098624f6e38-public-tls-certs\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.051405 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d423914e-2c66-43fb-984f-8936b0c1e5af-config-data\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.068493 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss77s\" (UniqueName: \"kubernetes.io/projected/61afa819-98c6-408a-b62b-9098624f6e38-kube-api-access-ss77s\") pod \"nova-api-0\" (UID: \"61afa819-98c6-408a-b62b-9098624f6e38\") " pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.068691 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4v78\" (UniqueName: \"kubernetes.io/projected/d423914e-2c66-43fb-984f-8936b0c1e5af-kube-api-access-x4v78\") pod \"nova-metadata-0\" (UID: \"d423914e-2c66-43fb-984f-8936b0c1e5af\") " pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.146287 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4967d3fb-572e-4d96-85f2-7f3ef31acc3b-config-data\") pod \"nova-scheduler-0\" (UID: \"4967d3fb-572e-4d96-85f2-7f3ef31acc3b\") " pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.146717 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4967d3fb-572e-4d96-85f2-7f3ef31acc3b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4967d3fb-572e-4d96-85f2-7f3ef31acc3b\") " pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.146859 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62kdp\" (UniqueName: \"kubernetes.io/projected/4967d3fb-572e-4d96-85f2-7f3ef31acc3b-kube-api-access-62kdp\") pod \"nova-scheduler-0\" (UID: \"4967d3fb-572e-4d96-85f2-7f3ef31acc3b\") " pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.150709 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.151052 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4967d3fb-572e-4d96-85f2-7f3ef31acc3b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4967d3fb-572e-4d96-85f2-7f3ef31acc3b\") " pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.151845 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4967d3fb-572e-4d96-85f2-7f3ef31acc3b-config-data\") pod \"nova-scheduler-0\" (UID: \"4967d3fb-572e-4d96-85f2-7f3ef31acc3b\") " pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.166720 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62kdp\" (UniqueName: \"kubernetes.io/projected/4967d3fb-572e-4d96-85f2-7f3ef31acc3b-kube-api-access-62kdp\") pod \"nova-scheduler-0\" (UID: \"4967d3fb-572e-4d96-85f2-7f3ef31acc3b\") " pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.237388 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.241526 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.705031 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 17:36:44 crc kubenswrapper[4688]: W0930 17:36:44.710447 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd423914e_2c66_43fb_984f_8936b0c1e5af.slice/crio-d89822b710c61bf7f35757348f260fb841d6f7c09318a19a8f978b3bd8c201f5 WatchSource:0}: Error finding container d89822b710c61bf7f35757348f260fb841d6f7c09318a19a8f978b3bd8c201f5: Status 404 returned error can't find the container with id d89822b710c61bf7f35757348f260fb841d6f7c09318a19a8f978b3bd8c201f5 Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.773427 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 17:36:44 crc kubenswrapper[4688]: W0930 17:36:44.777279 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4967d3fb_572e_4d96_85f2_7f3ef31acc3b.slice/crio-535d20f7d7e0fb2a038171f4c4ce3fb424fe052611ecab6387e5e1f24065f9b2 WatchSource:0}: Error finding container 535d20f7d7e0fb2a038171f4c4ce3fb424fe052611ecab6387e5e1f24065f9b2: Status 404 returned error can't find the container with id 535d20f7d7e0fb2a038171f4c4ce3fb424fe052611ecab6387e5e1f24065f9b2 Sep 30 17:36:44 crc kubenswrapper[4688]: I0930 17:36:44.844992 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 17:36:44 crc kubenswrapper[4688]: W0930 17:36:44.849901 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61afa819_98c6_408a_b62b_9098624f6e38.slice/crio-5f58a330ba6c2f7358d2878a4f333868576761909f8829a99d6f5fd8179c3433 WatchSource:0}: Error finding container 5f58a330ba6c2f7358d2878a4f333868576761909f8829a99d6f5fd8179c3433: Status 404 returned error can't find the container with id 5f58a330ba6c2f7358d2878a4f333868576761909f8829a99d6f5fd8179c3433 Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.442803 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f" path="/var/lib/kubelet/pods/5ad404c9-f2f9-4905-af5d-0f00b6ab2b1f/volumes" Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.444624 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dd64c0e-42f4-4fcd-94bc-0e067c4b7959" path="/var/lib/kubelet/pods/8dd64c0e-42f4-4fcd-94bc-0e067c4b7959/volumes" Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.446182 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96145437-c239-4ce7-9cc8-8315775ad266" path="/var/lib/kubelet/pods/96145437-c239-4ce7-9cc8-8315775ad266/volumes" Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.726675 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61afa819-98c6-408a-b62b-9098624f6e38","Type":"ContainerStarted","Data":"72d2c8b4b8e0cfee3bf95b87ae1e17460d166ac5d8f2c4fa5467ed3e3728db22"} Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.727056 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61afa819-98c6-408a-b62b-9098624f6e38","Type":"ContainerStarted","Data":"f5ccd856f53f1b54f80c2320d3b542f5e284d28c501fe38e67a34ce7a2fe4894"} Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.727072 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61afa819-98c6-408a-b62b-9098624f6e38","Type":"ContainerStarted","Data":"5f58a330ba6c2f7358d2878a4f333868576761909f8829a99d6f5fd8179c3433"} Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.731090 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4967d3fb-572e-4d96-85f2-7f3ef31acc3b","Type":"ContainerStarted","Data":"39afa1572c5135576fdfc0226615a1a10f7854449a6e64e902c267a4bb8913a7"} Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.731132 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4967d3fb-572e-4d96-85f2-7f3ef31acc3b","Type":"ContainerStarted","Data":"535d20f7d7e0fb2a038171f4c4ce3fb424fe052611ecab6387e5e1f24065f9b2"} Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.735762 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d423914e-2c66-43fb-984f-8936b0c1e5af","Type":"ContainerStarted","Data":"4c69d92430ac430a6e0994a7cb4b1fdffaf95e775402e3849eeaa8554d2f8791"} Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.735824 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d423914e-2c66-43fb-984f-8936b0c1e5af","Type":"ContainerStarted","Data":"997c92a5cdf73171029adc921f08565e24249629a8d92283ee543ee02f781da6"} Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.735839 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d423914e-2c66-43fb-984f-8936b0c1e5af","Type":"ContainerStarted","Data":"d89822b710c61bf7f35757348f260fb841d6f7c09318a19a8f978b3bd8c201f5"} Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.763724 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.763674023 podStartE2EDuration="2.763674023s" podCreationTimestamp="2025-09-30 17:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:45.748172601 +0000 UTC m=+1263.043610179" watchObservedRunningTime="2025-09-30 17:36:45.763674023 +0000 UTC m=+1263.059111581" Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.797454 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.797432817 podStartE2EDuration="2.797432817s" podCreationTimestamp="2025-09-30 17:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:45.77860281 +0000 UTC m=+1263.074040368" watchObservedRunningTime="2025-09-30 17:36:45.797432817 +0000 UTC m=+1263.092870375" Sep 30 17:36:45 crc kubenswrapper[4688]: I0930 17:36:45.804000 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.803982867 podStartE2EDuration="2.803982867s" podCreationTimestamp="2025-09-30 17:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:36:45.796779511 +0000 UTC m=+1263.092217099" watchObservedRunningTime="2025-09-30 17:36:45.803982867 +0000 UTC m=+1263.099420425" Sep 30 17:36:49 crc kubenswrapper[4688]: I0930 17:36:49.151725 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Sep 30 17:36:49 crc kubenswrapper[4688]: I0930 17:36:49.153744 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 17:36:49 crc kubenswrapper[4688]: I0930 17:36:49.242173 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 30 17:36:52 crc kubenswrapper[4688]: I0930 17:36:52.545874 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:36:52 crc kubenswrapper[4688]: I0930 17:36:52.546194 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:36:54 crc kubenswrapper[4688]: I0930 17:36:54.151946 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 17:36:54 crc kubenswrapper[4688]: I0930 17:36:54.153043 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 17:36:54 crc kubenswrapper[4688]: I0930 17:36:54.240878 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:36:54 crc kubenswrapper[4688]: I0930 17:36:54.240984 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 17:36:54 crc kubenswrapper[4688]: I0930 17:36:54.242735 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 30 17:36:54 crc kubenswrapper[4688]: I0930 17:36:54.297220 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 30 17:36:54 crc kubenswrapper[4688]: I0930 17:36:54.871174 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 30 17:36:55 crc kubenswrapper[4688]: I0930 17:36:55.169865 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d423914e-2c66-43fb-984f-8936b0c1e5af" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 17:36:55 crc kubenswrapper[4688]: I0930 17:36:55.169890 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d423914e-2c66-43fb-984f-8936b0c1e5af" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 17:36:55 crc kubenswrapper[4688]: I0930 17:36:55.290853 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61afa819-98c6-408a-b62b-9098624f6e38" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 17:36:55 crc kubenswrapper[4688]: I0930 17:36:55.290987 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="61afa819-98c6-408a-b62b-9098624f6e38" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 17:36:55 crc kubenswrapper[4688]: I0930 17:36:55.907541 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.158251 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.159978 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.168498 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.169104 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.263033 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.264200 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.268163 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.269092 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.955795 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 17:37:04 crc kubenswrapper[4688]: I0930 17:37:04.965543 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 17:37:22 crc kubenswrapper[4688]: I0930 17:37:22.545547 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:37:22 crc kubenswrapper[4688]: I0930 17:37:22.547167 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:37:46 crc kubenswrapper[4688]: I0930 17:37:46.217738 4688 scope.go:117] "RemoveContainer" containerID="d2fd4a2f3bf3a07ba713bde55401da324d1aadafad23a499e98a1d83b3e8711c" Sep 30 17:37:52 crc kubenswrapper[4688]: I0930 17:37:52.545854 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:37:52 crc kubenswrapper[4688]: I0930 17:37:52.547814 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:37:52 crc kubenswrapper[4688]: I0930 17:37:52.547933 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:37:52 crc kubenswrapper[4688]: I0930 17:37:52.548872 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c732061131626f7b85d7072fd61b6333264160dfe91ce2f932946d1355bafbc"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:37:52 crc kubenswrapper[4688]: I0930 17:37:52.549022 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://0c732061131626f7b85d7072fd61b6333264160dfe91ce2f932946d1355bafbc" gracePeriod=600 Sep 30 17:37:53 crc kubenswrapper[4688]: I0930 17:37:53.406274 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="0c732061131626f7b85d7072fd61b6333264160dfe91ce2f932946d1355bafbc" exitCode=0 Sep 30 17:37:53 crc kubenswrapper[4688]: I0930 17:37:53.406358 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"0c732061131626f7b85d7072fd61b6333264160dfe91ce2f932946d1355bafbc"} Sep 30 17:37:53 crc kubenswrapper[4688]: I0930 17:37:53.406943 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"} Sep 30 17:37:53 crc kubenswrapper[4688]: I0930 17:37:53.406975 4688 scope.go:117] "RemoveContainer" containerID="60da3ee0a4d40156f3d5a2c83b4b356b77441efafed42128600f215199dc7b8d" Sep 30 17:37:59 crc kubenswrapper[4688]: E0930 17:37:59.917890 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="cdd2efbd-01c7-43ce-8521-e4b7de27bf28" Sep 30 17:38:00 crc kubenswrapper[4688]: I0930 17:38:00.478212 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Sep 30 17:38:01 crc kubenswrapper[4688]: I0930 17:38:01.300400 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:38:01 crc kubenswrapper[4688]: E0930 17:38:01.300733 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:38:01 crc kubenswrapper[4688]: E0930 17:38:01.300787 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:38:01 crc kubenswrapper[4688]: E0930 17:38:01.300888 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:40:03.300860399 +0000 UTC m=+1460.596298177 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:38:28 crc kubenswrapper[4688]: E0930 17:38:28.363715 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-647757d8c5-6pzkp" podUID="62277faa-c7c7-4151-b35f-f3ebe78d3ffe" Sep 30 17:38:28 crc kubenswrapper[4688]: I0930 17:38:28.784032 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:38:32 crc kubenswrapper[4688]: I0930 17:38:32.476666 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:38:32 crc kubenswrapper[4688]: E0930 17:38:32.477126 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:38:32 crc kubenswrapper[4688]: E0930 17:38:32.477933 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found Sep 30 17:38:32 crc kubenswrapper[4688]: E0930 17:38:32.478055 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:40:34.47801645 +0000 UTC m=+1491.773454018 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found Sep 30 17:38:46 crc kubenswrapper[4688]: I0930 17:38:46.273381 4688 scope.go:117] "RemoveContainer" containerID="a555da79ecda92e047f32300c49a2e5e6cafa641eb8727fbd2e02278c351e4c7" Sep 30 17:38:49 crc kubenswrapper[4688]: I0930 17:38:49.805160 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qstbj"] Sep 30 17:38:49 crc kubenswrapper[4688]: I0930 17:38:49.808450 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:49 crc kubenswrapper[4688]: I0930 17:38:49.818639 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qstbj"] Sep 30 17:38:49 crc kubenswrapper[4688]: I0930 17:38:49.957181 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-catalog-content\") pod \"redhat-operators-qstbj\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:49 crc kubenswrapper[4688]: I0930 17:38:49.957355 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-utilities\") pod \"redhat-operators-qstbj\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:49 crc kubenswrapper[4688]: I0930 17:38:49.957378 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4bg4\" (UniqueName: \"kubernetes.io/projected/c9ce8482-7f27-4de1-bdad-5d032e5e9765-kube-api-access-w4bg4\") pod \"redhat-operators-qstbj\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:50 crc kubenswrapper[4688]: I0930 17:38:50.059412 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-catalog-content\") pod \"redhat-operators-qstbj\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:50 crc kubenswrapper[4688]: I0930 17:38:50.059644 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-utilities\") pod \"redhat-operators-qstbj\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:50 crc kubenswrapper[4688]: I0930 17:38:50.059696 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4bg4\" (UniqueName: \"kubernetes.io/projected/c9ce8482-7f27-4de1-bdad-5d032e5e9765-kube-api-access-w4bg4\") pod \"redhat-operators-qstbj\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:50 crc kubenswrapper[4688]: I0930 17:38:50.060191 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-catalog-content\") pod \"redhat-operators-qstbj\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:50 crc kubenswrapper[4688]: I0930 17:38:50.060274 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-utilities\") pod \"redhat-operators-qstbj\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:50 crc kubenswrapper[4688]: I0930 17:38:50.081972 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4bg4\" (UniqueName: \"kubernetes.io/projected/c9ce8482-7f27-4de1-bdad-5d032e5e9765-kube-api-access-w4bg4\") pod \"redhat-operators-qstbj\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:50 crc kubenswrapper[4688]: I0930 17:38:50.136074 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:38:50 crc kubenswrapper[4688]: I0930 17:38:50.683272 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qstbj"] Sep 30 17:38:51 crc kubenswrapper[4688]: I0930 17:38:51.018535 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerID="beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9" exitCode=0 Sep 30 17:38:51 crc kubenswrapper[4688]: I0930 17:38:51.018624 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstbj" event={"ID":"c9ce8482-7f27-4de1-bdad-5d032e5e9765","Type":"ContainerDied","Data":"beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9"} Sep 30 17:38:51 crc kubenswrapper[4688]: I0930 17:38:51.018977 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstbj" event={"ID":"c9ce8482-7f27-4de1-bdad-5d032e5e9765","Type":"ContainerStarted","Data":"132676a6fc472c21ea2c384b9bd019382a7b24de8fa7580450d59c6cc997eb38"} Sep 30 17:38:53 crc kubenswrapper[4688]: I0930 17:38:53.042100 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerID="cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded" exitCode=0 Sep 30 17:38:53 crc kubenswrapper[4688]: I0930 17:38:53.042149 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstbj" event={"ID":"c9ce8482-7f27-4de1-bdad-5d032e5e9765","Type":"ContainerDied","Data":"cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded"} Sep 30 17:38:56 crc kubenswrapper[4688]: I0930 17:38:56.071988 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstbj" event={"ID":"c9ce8482-7f27-4de1-bdad-5d032e5e9765","Type":"ContainerStarted","Data":"56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11"} Sep 30 17:38:56 crc kubenswrapper[4688]: I0930 17:38:56.091819 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qstbj" podStartSLOduration=3.112247385 podStartE2EDuration="7.091802914s" podCreationTimestamp="2025-09-30 17:38:49 +0000 UTC" firstStartedPulling="2025-09-30 17:38:51.0204307 +0000 UTC m=+1388.315868258" lastFinishedPulling="2025-09-30 
17:38:54.999986229 +0000 UTC m=+1392.295423787" observedRunningTime="2025-09-30 17:38:56.091736633 +0000 UTC m=+1393.387174201" watchObservedRunningTime="2025-09-30 17:38:56.091802914 +0000 UTC m=+1393.387240472" Sep 30 17:39:00 crc kubenswrapper[4688]: I0930 17:39:00.136349 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:39:00 crc kubenswrapper[4688]: I0930 17:39:00.137925 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:39:00 crc kubenswrapper[4688]: I0930 17:39:00.191830 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:39:01 crc kubenswrapper[4688]: I0930 17:39:01.171601 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:39:01 crc kubenswrapper[4688]: I0930 17:39:01.221759 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qstbj"] Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.134709 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qstbj" podUID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerName="registry-server" containerID="cri-o://56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11" gracePeriod=2 Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.606645 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.655431 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4bg4\" (UniqueName: \"kubernetes.io/projected/c9ce8482-7f27-4de1-bdad-5d032e5e9765-kube-api-access-w4bg4\") pod \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.655521 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-catalog-content\") pod \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.655544 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-utilities\") pod \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\" (UID: \"c9ce8482-7f27-4de1-bdad-5d032e5e9765\") " Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.656976 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-utilities" (OuterVolumeSpecName: "utilities") pod "c9ce8482-7f27-4de1-bdad-5d032e5e9765" (UID: "c9ce8482-7f27-4de1-bdad-5d032e5e9765"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.662920 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ce8482-7f27-4de1-bdad-5d032e5e9765-kube-api-access-w4bg4" (OuterVolumeSpecName: "kube-api-access-w4bg4") pod "c9ce8482-7f27-4de1-bdad-5d032e5e9765" (UID: "c9ce8482-7f27-4de1-bdad-5d032e5e9765"). 
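
The pod_startup_latency_tracker entry above for redhat-operators-qstbj is internally consistent: podStartE2EDuration (7.091802914s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (3.112247385s) is that end-to-end figure minus the image-pull window (lastFinishedPulling − firstStartedPulling ≈ 3.98s), so the SLO metric excludes time spent pulling from the registry. The arithmetic, reproduced from the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

// Recomputes the startup-latency figures for redhat-operators-qstbj from the
// timestamps logged above. The layout string is Go's default time.Time
// rendering, which is also the format these log fields use.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-30 17:38:49 +0000 UTC")
	firstPull := mustParse("2025-09-30 17:38:51.0204307 +0000 UTC")
	lastPull := mustParse("2025-09-30 17:38:54.999986229 +0000 UTC")
	running := mustParse("2025-09-30 17:38:56.091802914 +0000 UTC")

	e2e := running.Sub(created)     // 7.091802914s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 3.979555529s spent pulling images
	fmt.Println("e2e:", e2e)
	fmt.Println("slo:", e2e-pull) // 3.112247385s == podStartSLOduration
}
```
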
InnerVolumeSpecName "kube-api-access-w4bg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.758758 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4bg4\" (UniqueName: \"kubernetes.io/projected/c9ce8482-7f27-4de1-bdad-5d032e5e9765-kube-api-access-w4bg4\") on node \"crc\" DevicePath \"\"" Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.758988 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.770635 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9ce8482-7f27-4de1-bdad-5d032e5e9765" (UID: "c9ce8482-7f27-4de1-bdad-5d032e5e9765"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:39:03 crc kubenswrapper[4688]: I0930 17:39:03.861150 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ce8482-7f27-4de1-bdad-5d032e5e9765-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.147870 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qstbj" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.147874 4688 generic.go:334] "Generic (PLEG): container finished" podID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerID="56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11" exitCode=0 Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.148653 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstbj" event={"ID":"c9ce8482-7f27-4de1-bdad-5d032e5e9765","Type":"ContainerDied","Data":"56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11"} Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.148697 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qstbj" event={"ID":"c9ce8482-7f27-4de1-bdad-5d032e5e9765","Type":"ContainerDied","Data":"132676a6fc472c21ea2c384b9bd019382a7b24de8fa7580450d59c6cc997eb38"} Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.148716 4688 scope.go:117] "RemoveContainer" containerID="56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.170647 4688 scope.go:117] "RemoveContainer" containerID="cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.186720 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qstbj"] Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.227269 4688 scope.go:117] "RemoveContainer" containerID="beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.228958 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qstbj"] Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.262216 4688 scope.go:117] "RemoveContainer" containerID="56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11" Sep 30 17:39:04 crc kubenswrapper[4688]: E0930 
17:39:04.263419 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11\": container with ID starting with 56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11 not found: ID does not exist" containerID="56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.263465 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11"} err="failed to get container status \"56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11\": rpc error: code = NotFound desc = could not find container \"56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11\": container with ID starting with 56bec879d11f337d04b5ac9bdc27fb4e2fdcf517a04f62004bc8885d512c7d11 not found: ID does not exist" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.263497 4688 scope.go:117] "RemoveContainer" containerID="cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded" Sep 30 17:39:04 crc kubenswrapper[4688]: E0930 17:39:04.264216 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded\": container with ID starting with cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded not found: ID does not exist" containerID="cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.264299 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded"} err="failed to get container status \"cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded\": rpc error: code = NotFound desc = could not find container \"cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded\": container with ID starting with cd31e86025fd7af2bf5459596197431c761b642f54afb95c37dd5e1d54d49ded not found: ID does not exist" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.264373 4688 scope.go:117] "RemoveContainer" containerID="beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9" Sep 30 17:39:04 crc kubenswrapper[4688]: E0930 17:39:04.265330 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9\": container with ID starting with beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9 not found: ID does not exist" containerID="beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9" Sep 30 17:39:04 crc kubenswrapper[4688]: I0930 17:39:04.265373 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9"} err="failed to get container status \"beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9\": rpc error: code = NotFound desc = could not find container \"beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9\": container with ID starting with beba80cf6952c48b179a8b9f65ffa7124acb3034067f640d1a018b63c7f624b9 not found: ID does not exist" Sep 30 17:39:05 crc kubenswrapper[4688]: I0930 17:39:05.443101 4688 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" path="/var/lib/kubelet/pods/c9ce8482-7f27-4de1-bdad-5d032e5e9765/volumes" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.646125 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c5nmf"] Sep 30 17:39:38 crc kubenswrapper[4688]: E0930 17:39:38.647614 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerName="registry-server" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.647638 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerName="registry-server" Sep 30 17:39:38 crc kubenswrapper[4688]: E0930 17:39:38.647652 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerName="extract-utilities" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.647659 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerName="extract-utilities" Sep 30 17:39:38 crc kubenswrapper[4688]: E0930 17:39:38.647674 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerName="extract-content" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.647684 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerName="extract-content" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.647974 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ce8482-7f27-4de1-bdad-5d032e5e9765" containerName="registry-server" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.650068 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.660059 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c5nmf"] Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.751367 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-catalog-content\") pod \"certified-operators-c5nmf\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.751474 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6bfr\" (UniqueName: \"kubernetes.io/projected/40d5c5a8-322c-49f2-8be8-2b868e38e99a-kube-api-access-l6bfr\") pod \"certified-operators-c5nmf\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.752746 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-utilities\") pod \"certified-operators-c5nmf\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.854841 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-utilities\") pod \"certified-operators-c5nmf\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.855317 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-catalog-content\") pod \"certified-operators-c5nmf\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.855384 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6bfr\" (UniqueName: \"kubernetes.io/projected/40d5c5a8-322c-49f2-8be8-2b868e38e99a-kube-api-access-l6bfr\") pod \"certified-operators-c5nmf\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.855458 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-utilities\") pod \"certified-operators-c5nmf\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.855967 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-catalog-content\") pod \"certified-operators-c5nmf\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.881188 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l6bfr\" (UniqueName: \"kubernetes.io/projected/40d5c5a8-322c-49f2-8be8-2b868e38e99a-kube-api-access-l6bfr\") pod \"certified-operators-c5nmf\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:38 crc kubenswrapper[4688]: I0930 17:39:38.985322 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:39 crc kubenswrapper[4688]: I0930 17:39:39.333392 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c5nmf"] Sep 30 17:39:39 crc kubenswrapper[4688]: W0930 17:39:39.355837 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40d5c5a8_322c_49f2_8be8_2b868e38e99a.slice/crio-5379f745049c08c3b3e33629484995a186ecb0d972a0d118fc23c1c698f78fd9 WatchSource:0}: Error finding container 5379f745049c08c3b3e33629484995a186ecb0d972a0d118fc23c1c698f78fd9: Status 404 returned error can't find the container with id 5379f745049c08c3b3e33629484995a186ecb0d972a0d118fc23c1c698f78fd9 Sep 30 17:39:39 crc kubenswrapper[4688]: I0930 17:39:39.505378 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5nmf" event={"ID":"40d5c5a8-322c-49f2-8be8-2b868e38e99a","Type":"ContainerStarted","Data":"5379f745049c08c3b3e33629484995a186ecb0d972a0d118fc23c1c698f78fd9"} Sep 30 17:39:40 crc kubenswrapper[4688]: I0930 17:39:40.521288 4688 generic.go:334] "Generic (PLEG): container finished" podID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerID="853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9" exitCode=0 Sep 30 17:39:40 crc kubenswrapper[4688]: I0930 17:39:40.521413 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5nmf" event={"ID":"40d5c5a8-322c-49f2-8be8-2b868e38e99a","Type":"ContainerDied","Data":"853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9"} Sep 30 17:39:42 crc kubenswrapper[4688]: I0930 17:39:42.548235 4688 generic.go:334] "Generic (PLEG): container finished" podID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerID="cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2" exitCode=0 Sep 30 17:39:42 crc kubenswrapper[4688]: I0930 17:39:42.548735 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5nmf" event={"ID":"40d5c5a8-322c-49f2-8be8-2b868e38e99a","Type":"ContainerDied","Data":"cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2"} Sep 30 17:39:43 crc kubenswrapper[4688]: I0930 17:39:43.560796 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5nmf" event={"ID":"40d5c5a8-322c-49f2-8be8-2b868e38e99a","Type":"ContainerStarted","Data":"ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d"} Sep 30 17:39:43 crc kubenswrapper[4688]: I0930 17:39:43.588831 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c5nmf" podStartSLOduration=3.124199723 podStartE2EDuration="5.588809426s" podCreationTimestamp="2025-09-30 17:39:38 +0000 UTC" firstStartedPulling="2025-09-30 17:39:40.524612486 +0000 UTC m=+1437.820050064" lastFinishedPulling="2025-09-30 17:39:42.989222209 +0000 UTC m=+1440.284659767" observedRunningTime="2025-09-30 17:39:43.582233356 +0000 UTC 
m=+1440.877670914" watchObservedRunningTime="2025-09-30 17:39:43.588809426 +0000 UTC m=+1440.884246984" Sep 30 17:39:46 crc kubenswrapper[4688]: I0930 17:39:46.358299 4688 scope.go:117] "RemoveContainer" containerID="1be03ea4fb3c14df395d5297792c422830943ccbd8436612a11a4a61f8845ebc" Sep 30 17:39:48 crc kubenswrapper[4688]: I0930 17:39:48.985493 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:48 crc kubenswrapper[4688]: I0930 17:39:48.986312 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:49 crc kubenswrapper[4688]: I0930 17:39:49.062319 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:49 crc kubenswrapper[4688]: I0930 17:39:49.680348 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:49 crc kubenswrapper[4688]: I0930 17:39:49.746244 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c5nmf"] Sep 30 17:39:51 crc kubenswrapper[4688]: I0930 17:39:51.644676 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c5nmf" podUID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerName="registry-server" containerID="cri-o://ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d" gracePeriod=2 Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.117899 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.278307 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-catalog-content\") pod \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.278952 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-utilities\") pod \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.279758 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-utilities" (OuterVolumeSpecName: "utilities") pod "40d5c5a8-322c-49f2-8be8-2b868e38e99a" (UID: "40d5c5a8-322c-49f2-8be8-2b868e38e99a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.279858 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6bfr\" (UniqueName: \"kubernetes.io/projected/40d5c5a8-322c-49f2-8be8-2b868e38e99a-kube-api-access-l6bfr\") pod \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\" (UID: \"40d5c5a8-322c-49f2-8be8-2b868e38e99a\") " Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.281286 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.298988 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d5c5a8-322c-49f2-8be8-2b868e38e99a-kube-api-access-l6bfr" (OuterVolumeSpecName: "kube-api-access-l6bfr") pod "40d5c5a8-322c-49f2-8be8-2b868e38e99a" (UID: "40d5c5a8-322c-49f2-8be8-2b868e38e99a"). InnerVolumeSpecName "kube-api-access-l6bfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.333130 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40d5c5a8-322c-49f2-8be8-2b868e38e99a" (UID: "40d5c5a8-322c-49f2-8be8-2b868e38e99a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.384016 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40d5c5a8-322c-49f2-8be8-2b868e38e99a-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.384055 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6bfr\" (UniqueName: \"kubernetes.io/projected/40d5c5a8-322c-49f2-8be8-2b868e38e99a-kube-api-access-l6bfr\") on node \"crc\" DevicePath \"\"" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.563806 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.563916 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.661392 4688 generic.go:334] "Generic (PLEG): container finished" podID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerID="ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d" exitCode=0 Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.661441 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5nmf" event={"ID":"40d5c5a8-322c-49f2-8be8-2b868e38e99a","Type":"ContainerDied","Data":"ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d"} Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.661469 4688 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c5nmf" event={"ID":"40d5c5a8-322c-49f2-8be8-2b868e38e99a","Type":"ContainerDied","Data":"5379f745049c08c3b3e33629484995a186ecb0d972a0d118fc23c1c698f78fd9"} Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.661468 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c5nmf" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.661484 4688 scope.go:117] "RemoveContainer" containerID="ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.689502 4688 scope.go:117] "RemoveContainer" containerID="cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.715184 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c5nmf"] Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.726859 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c5nmf"] Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.729694 4688 scope.go:117] "RemoveContainer" containerID="853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.782118 4688 scope.go:117] "RemoveContainer" containerID="ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d" Sep 30 17:39:52 crc kubenswrapper[4688]: E0930 17:39:52.782966 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d\": container with ID starting with ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d not found: ID does not exist" containerID="ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.783009 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d"} err="failed to get container status \"ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d\": rpc error: code = NotFound desc = could not find container \"ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d\": container with ID starting with ce9f172257910c7081053920925cb2c44c7f9c24c8a47c86f2649d4ea37cd06d not found: ID does not exist" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.783038 4688 scope.go:117] "RemoveContainer" containerID="cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2" Sep 30 17:39:52 crc kubenswrapper[4688]: E0930 17:39:52.783479 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2\": container with ID starting with cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2 not found: ID does not exist" containerID="cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.783503 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2"} err="failed to get container status \"cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2\": rpc error: code = 
NotFound desc = could not find container \"cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2\": container with ID starting with cbc9f940e2e364d84760aad53cfaf73e50fb09a54e2b8cfe811b8cacbf351ba2 not found: ID does not exist" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.783516 4688 scope.go:117] "RemoveContainer" containerID="853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9" Sep 30 17:39:52 crc kubenswrapper[4688]: E0930 17:39:52.784187 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9\": container with ID starting with 853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9 not found: ID does not exist" containerID="853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9" Sep 30 17:39:52 crc kubenswrapper[4688]: I0930 17:39:52.784209 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9"} err="failed to get container status \"853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9\": rpc error: code = NotFound desc = could not find container \"853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9\": container with ID starting with 853af1a1f1594ffca71f5d45b76292710e9fab19c759ab76c7e754579b6abab9 not found: ID does not exist" Sep 30 17:39:53 crc kubenswrapper[4688]: I0930 17:39:53.443944 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" path="/var/lib/kubelet/pods/40d5c5a8-322c-49f2-8be8-2b868e38e99a/volumes" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.497739 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-krbz2"] Sep 30 17:39:58 crc kubenswrapper[4688]: E0930 17:39:58.498814 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerName="registry-server" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.498834 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerName="registry-server" Sep 30 17:39:58 crc kubenswrapper[4688]: E0930 17:39:58.498875 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerName="extract-content" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.498883 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerName="extract-content" Sep 30 17:39:58 crc kubenswrapper[4688]: E0930 17:39:58.498900 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerName="extract-utilities" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.498910 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerName="extract-utilities" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.499167 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d5c5a8-322c-49f2-8be8-2b868e38e99a" containerName="registry-server" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.501119 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.530582 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krbz2"] Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.547300 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-catalog-content\") pod \"community-operators-krbz2\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.547844 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dqgb\" (UniqueName: \"kubernetes.io/projected/85198149-be80-4b68-814a-b73c5bf1eaf9-kube-api-access-2dqgb\") pod \"community-operators-krbz2\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.549243 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-utilities\") pod \"community-operators-krbz2\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.651885 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-utilities\") pod \"community-operators-krbz2\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.652065 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-catalog-content\") pod \"community-operators-krbz2\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.652122 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dqgb\" (UniqueName: \"kubernetes.io/projected/85198149-be80-4b68-814a-b73c5bf1eaf9-kube-api-access-2dqgb\") pod \"community-operators-krbz2\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.653253 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-utilities\") pod \"community-operators-krbz2\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.653378 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-catalog-content\") pod \"community-operators-krbz2\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.694503 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2dqgb\" (UniqueName: \"kubernetes.io/projected/85198149-be80-4b68-814a-b73c5bf1eaf9-kube-api-access-2dqgb\") pod \"community-operators-krbz2\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:58 crc kubenswrapper[4688]: I0930 17:39:58.876110 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:39:59 crc kubenswrapper[4688]: I0930 17:39:59.447453 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krbz2"] Sep 30 17:39:59 crc kubenswrapper[4688]: I0930 17:39:59.750297 4688 generic.go:334] "Generic (PLEG): container finished" podID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerID="412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133" exitCode=0 Sep 30 17:39:59 crc kubenswrapper[4688]: I0930 17:39:59.750462 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krbz2" event={"ID":"85198149-be80-4b68-814a-b73c5bf1eaf9","Type":"ContainerDied","Data":"412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133"} Sep 30 17:39:59 crc kubenswrapper[4688]: I0930 17:39:59.750580 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krbz2" event={"ID":"85198149-be80-4b68-814a-b73c5bf1eaf9","Type":"ContainerStarted","Data":"18923c0ccc48294947418db78e8f038b322c11ed3b78cfbf616df1e77249740f"} Sep 30 17:40:00 crc kubenswrapper[4688]: I0930 17:40:00.764185 4688 generic.go:334] "Generic (PLEG): container finished" podID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerID="e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad" exitCode=0 Sep 30 17:40:00 crc kubenswrapper[4688]: I0930 17:40:00.764375 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krbz2" event={"ID":"85198149-be80-4b68-814a-b73c5bf1eaf9","Type":"ContainerDied","Data":"e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad"} Sep 30 17:40:01 crc kubenswrapper[4688]: I0930 17:40:01.777165 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krbz2" event={"ID":"85198149-be80-4b68-814a-b73c5bf1eaf9","Type":"ContainerStarted","Data":"2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d"} Sep 30 17:40:01 crc kubenswrapper[4688]: I0930 17:40:01.810857 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-krbz2" podStartSLOduration=2.126063272 podStartE2EDuration="3.810822194s" podCreationTimestamp="2025-09-30 17:39:58 +0000 UTC" firstStartedPulling="2025-09-30 17:39:59.75232484 +0000 UTC m=+1457.047762398" lastFinishedPulling="2025-09-30 17:40:01.437083762 +0000 UTC m=+1458.732521320" observedRunningTime="2025-09-30 17:40:01.806219105 +0000 UTC m=+1459.101656663" watchObservedRunningTime="2025-09-30 17:40:01.810822194 +0000 UTC m=+1459.106259762" Sep 30 17:40:03 crc kubenswrapper[4688]: I0930 17:40:03.373489 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:40:03 crc kubenswrapper[4688]: E0930 17:40:03.373778 4688 projected.go:288] Couldn't 
get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:40:03 crc kubenswrapper[4688]: E0930 17:40:03.373973 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:40:03 crc kubenswrapper[4688]: E0930 17:40:03.374034 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:42:05.374015158 +0000 UTC m=+1582.669452716 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:40:03 crc kubenswrapper[4688]: E0930 17:40:03.480168 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="cdd2efbd-01c7-43ce-8521-e4b7de27bf28" Sep 30 17:40:03 crc kubenswrapper[4688]: I0930 17:40:03.800633 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Sep 30 17:40:08 crc kubenswrapper[4688]: I0930 17:40:08.877071 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:40:08 crc kubenswrapper[4688]: I0930 17:40:08.877830 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:40:08 crc kubenswrapper[4688]: I0930 17:40:08.932715 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:40:09 crc kubenswrapper[4688]: I0930 17:40:09.938227 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:40:10 crc kubenswrapper[4688]: I0930 17:40:10.002034 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krbz2"] Sep 30 17:40:11 crc kubenswrapper[4688]: I0930 17:40:11.899777 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-krbz2" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="registry-server" containerID="cri-o://2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d" gracePeriod=2 Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.410480 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.504091 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-catalog-content\") pod \"85198149-be80-4b68-814a-b73c5bf1eaf9\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.504756 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dqgb\" (UniqueName: \"kubernetes.io/projected/85198149-be80-4b68-814a-b73c5bf1eaf9-kube-api-access-2dqgb\") pod \"85198149-be80-4b68-814a-b73c5bf1eaf9\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.504858 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-utilities\") pod \"85198149-be80-4b68-814a-b73c5bf1eaf9\" (UID: \"85198149-be80-4b68-814a-b73c5bf1eaf9\") " Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.506010 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-utilities" (OuterVolumeSpecName: "utilities") pod "85198149-be80-4b68-814a-b73c5bf1eaf9" (UID: "85198149-be80-4b68-814a-b73c5bf1eaf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.512804 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85198149-be80-4b68-814a-b73c5bf1eaf9-kube-api-access-2dqgb" (OuterVolumeSpecName: "kube-api-access-2dqgb") pod "85198149-be80-4b68-814a-b73c5bf1eaf9" (UID: "85198149-be80-4b68-814a-b73c5bf1eaf9"). InnerVolumeSpecName "kube-api-access-2dqgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.570200 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85198149-be80-4b68-814a-b73c5bf1eaf9" (UID: "85198149-be80-4b68-814a-b73c5bf1eaf9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.608589 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dqgb\" (UniqueName: \"kubernetes.io/projected/85198149-be80-4b68-814a-b73c5bf1eaf9-kube-api-access-2dqgb\") on node \"crc\" DevicePath \"\"" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.608632 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.608646 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85198149-be80-4b68-814a-b73c5bf1eaf9-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.914841 4688 generic.go:334] "Generic (PLEG): container finished" podID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerID="2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d" exitCode=0 Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.914905 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krbz2" event={"ID":"85198149-be80-4b68-814a-b73c5bf1eaf9","Type":"ContainerDied","Data":"2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d"} Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.914954 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krbz2" event={"ID":"85198149-be80-4b68-814a-b73c5bf1eaf9","Type":"ContainerDied","Data":"18923c0ccc48294947418db78e8f038b322c11ed3b78cfbf616df1e77249740f"} Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.914980 4688 scope.go:117] "RemoveContainer" containerID="2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.915018 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-krbz2" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.957295 4688 scope.go:117] "RemoveContainer" containerID="e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad" Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.960834 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krbz2"] Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.969582 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-krbz2"] Sep 30 17:40:12 crc kubenswrapper[4688]: I0930 17:40:12.980393 4688 scope.go:117] "RemoveContainer" containerID="412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133" Sep 30 17:40:13 crc kubenswrapper[4688]: I0930 17:40:13.027958 4688 scope.go:117] "RemoveContainer" containerID="2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d" Sep 30 17:40:13 crc kubenswrapper[4688]: E0930 17:40:13.029051 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d\": container with ID starting with 2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d not found: ID does not exist" containerID="2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d" Sep 30 17:40:13 crc kubenswrapper[4688]: I0930 17:40:13.029114 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d"} err="failed to get container status \"2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d\": rpc error: code = NotFound desc = could not find container \"2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d\": container with ID starting with 2f61a03c663d8845e64dc0219798f3714487a3eb736811f87fc4c55270e30e8d not found: ID does not exist" Sep 30 17:40:13 crc kubenswrapper[4688]: I0930 17:40:13.029148 4688 scope.go:117] "RemoveContainer" containerID="e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad" Sep 30 17:40:13 crc kubenswrapper[4688]: E0930 17:40:13.029920 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad\": container with ID starting with e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad not found: ID does not exist" containerID="e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad" Sep 30 17:40:13 crc kubenswrapper[4688]: I0930 17:40:13.029966 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad"} err="failed to get container status \"e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad\": rpc error: code = NotFound desc = could not find container \"e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad\": container with ID starting with e5169e920beb17c5e5e28684a971b06e74fa5bbe7e8dc1185a2eba5cedd9e9ad not found: ID does not exist" Sep 30 17:40:13 crc kubenswrapper[4688]: I0930 17:40:13.029999 4688 scope.go:117] "RemoveContainer" containerID="412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133" Sep 30 17:40:13 crc kubenswrapper[4688]: E0930 17:40:13.030458 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133\": container with ID starting with 412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133 not found: ID does not exist" containerID="412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133" Sep 30 17:40:13 crc kubenswrapper[4688]: I0930 17:40:13.030505 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133"} err="failed to get container status \"412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133\": rpc error: code = NotFound desc = could not find container \"412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133\": container with ID starting with 412f2a3773b0f31b02d4ee2eba53d82db57238d4ce75906077a9b1aa717fa133 not found: ID does not exist" Sep 30 17:40:13 crc kubenswrapper[4688]: I0930 17:40:13.442445 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" path="/var/lib/kubelet/pods/85198149-be80-4b68-814a-b73c5bf1eaf9/volumes" Sep 30 17:40:22 crc kubenswrapper[4688]: I0930 17:40:22.546349 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:40:22 crc kubenswrapper[4688]: I0930 17:40:22.547373 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.501845 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlzs"] Sep 30 17:40:27 crc kubenswrapper[4688]: E0930 17:40:27.503672 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="registry-server" Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.503699 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="registry-server" Sep 30 17:40:27 crc kubenswrapper[4688]: E0930 17:40:27.503766 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="extract-utilities" Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.503778 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="extract-utilities" Sep 30 17:40:27 crc kubenswrapper[4688]: E0930 17:40:27.503800 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="extract-content" Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.503811 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="extract-content" Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.504178 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="registry-server" Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 
Sep 30 17:40:13 crc kubenswrapper[4688]: I0930 17:40:13.442445 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" path="/var/lib/kubelet/pods/85198149-be80-4b68-814a-b73c5bf1eaf9/volumes"
Sep 30 17:40:22 crc kubenswrapper[4688]: I0930 17:40:22.546349 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:40:22 crc kubenswrapper[4688]: I0930 17:40:22.547373 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.501845 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlzs"]
Sep 30 17:40:27 crc kubenswrapper[4688]: E0930 17:40:27.503672 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="registry-server"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.503699 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="registry-server"
Sep 30 17:40:27 crc kubenswrapper[4688]: E0930 17:40:27.503766 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="extract-utilities"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.503778 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="extract-utilities"
Sep 30 17:40:27 crc kubenswrapper[4688]: E0930 17:40:27.503800 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="extract-content"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.503811 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="extract-content"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.504178 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="85198149-be80-4b68-814a-b73c5bf1eaf9" containerName="registry-server"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.506565 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.520430 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlzs"]
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.584951 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d763c768-7da3-4c5b-8ae5-9d002b53b580-utilities\") pod \"redhat-marketplace-6hlzs\" (UID: \"d763c768-7da3-4c5b-8ae5-9d002b53b580\") " pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.585015 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d763c768-7da3-4c5b-8ae5-9d002b53b580-catalog-content\") pod \"redhat-marketplace-6hlzs\" (UID: \"d763c768-7da3-4c5b-8ae5-9d002b53b580\") " pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.585831 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbd97\" (UniqueName: \"kubernetes.io/projected/d763c768-7da3-4c5b-8ae5-9d002b53b580-kube-api-access-gbd97\") pod \"redhat-marketplace-6hlzs\" (UID: \"d763c768-7da3-4c5b-8ae5-9d002b53b580\") " pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.688083 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbd97\" (UniqueName: \"kubernetes.io/projected/d763c768-7da3-4c5b-8ae5-9d002b53b580-kube-api-access-gbd97\") pod \"redhat-marketplace-6hlzs\" (UID: \"d763c768-7da3-4c5b-8ae5-9d002b53b580\") " pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.688190 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d763c768-7da3-4c5b-8ae5-9d002b53b580-utilities\") pod \"redhat-marketplace-6hlzs\" (UID: \"d763c768-7da3-4c5b-8ae5-9d002b53b580\") " pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.688222 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d763c768-7da3-4c5b-8ae5-9d002b53b580-catalog-content\") pod \"redhat-marketplace-6hlzs\" (UID: \"d763c768-7da3-4c5b-8ae5-9d002b53b580\") " pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.689138 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d763c768-7da3-4c5b-8ae5-9d002b53b580-utilities\") pod \"redhat-marketplace-6hlzs\" (UID: \"d763c768-7da3-4c5b-8ae5-9d002b53b580\") " pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.689164 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d763c768-7da3-4c5b-8ae5-9d002b53b580-catalog-content\") pod \"redhat-marketplace-6hlzs\" (UID: \"d763c768-7da3-4c5b-8ae5-9d002b53b580\") " pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.715635 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbd97\" (UniqueName: \"kubernetes.io/projected/d763c768-7da3-4c5b-8ae5-9d002b53b580-kube-api-access-gbd97\") pod \"redhat-marketplace-6hlzs\" (UID: \"d763c768-7da3-4c5b-8ae5-9d002b53b580\") " pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:27 crc kubenswrapper[4688]: I0930 17:40:27.843105 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hlzs"
Sep 30 17:40:28 crc kubenswrapper[4688]: I0930 17:40:28.342940 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlzs"]
Sep 30 17:40:29 crc kubenswrapper[4688]: I0930 17:40:29.097230 4688 generic.go:334] "Generic (PLEG): container finished" podID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerID="482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880" exitCode=0
Sep 30 17:40:29 crc kubenswrapper[4688]: I0930 17:40:29.097365 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlzs" event={"ID":"d763c768-7da3-4c5b-8ae5-9d002b53b580","Type":"ContainerDied","Data":"482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880"}
Sep 30 17:40:29 crc kubenswrapper[4688]: I0930 17:40:29.097735 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlzs" event={"ID":"d763c768-7da3-4c5b-8ae5-9d002b53b580","Type":"ContainerStarted","Data":"11517d5c3b00bee1be10c7b3f4cae044399102dfb30f9b4cd079ebe78d514ebe"}
Sep 30 17:40:31 crc kubenswrapper[4688]: I0930 17:40:31.125239 4688 generic.go:334] "Generic (PLEG): container finished" podID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerID="f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c" exitCode=0
Sep 30 17:40:31 crc kubenswrapper[4688]: I0930 17:40:31.125305 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlzs" event={"ID":"d763c768-7da3-4c5b-8ae5-9d002b53b580","Type":"ContainerDied","Data":"f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c"}
Sep 30 17:40:31 crc kubenswrapper[4688]: E0930 17:40:31.786391 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-647757d8c5-6pzkp" podUID="62277faa-c7c7-4151-b35f-f3ebe78d3ffe"
Sep 30 17:40:32 crc kubenswrapper[4688]: I0930 17:40:32.153342 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-647757d8c5-6pzkp"
Sep 30 17:40:32 crc kubenswrapper[4688]: I0930 17:40:32.153333 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlzs" event={"ID":"d763c768-7da3-4c5b-8ae5-9d002b53b580","Type":"ContainerStarted","Data":"ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9"}
Sep 30 17:40:32 crc kubenswrapper[4688]: I0930 17:40:32.180560 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6hlzs" podStartSLOduration=2.477464493 podStartE2EDuration="5.180529279s" podCreationTimestamp="2025-09-30 17:40:27 +0000 UTC" firstStartedPulling="2025-09-30 17:40:29.100093297 +0000 UTC m=+1486.395530855" lastFinishedPulling="2025-09-30 17:40:31.803158073 +0000 UTC m=+1489.098595641" observedRunningTime="2025-09-30 17:40:32.178465885 +0000 UTC m=+1489.473903483" watchObservedRunningTime="2025-09-30 17:40:32.180529279 +0000 UTC m=+1489.475966837"
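The startup-latency line above is internally consistent arithmetic on the kubelet's monotonic clock (the m=+... offsets): podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving the numbers in Python:

    # Monotonic m=+... offsets copied from the log line above.
    first_started_pulling = 1486.395530855
    last_finished_pulling = 1489.098595641
    pull_window = last_finished_pulling - first_started_pulling  # 2.703064786 s

    e2e = 5.180529279        # podStartE2EDuration
    slo = e2e - pull_window  # image-pull time does not count against the SLO
    print(f"{slo:.9f}")      # -> 2.477464493, matching podStartSLOduration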
Sep 30 17:40:34 crc kubenswrapper[4688]: I0930 17:40:34.565212 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp"
Sep 30 17:40:34 crc kubenswrapper[4688]: E0930 17:40:34.565526 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 17:40:34 crc kubenswrapper[4688]: E0930 17:40:34.565768 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found
Sep 30 17:40:34 crc kubenswrapper[4688]: E0930 17:40:34.565898 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:42:36.565863516 +0000 UTC m=+1613.861301114 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found
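This is the first of many identical etc-swift failures in the rest of the log: etc-swift is a projected volume sourced from the swift-ring-files ConfigMap, so until that ConfigMap exists (presumably published by a Swift ring-sync step in this deployment, which has evidently not run yet) every SetUp attempt for swift-proxy and swift-storage-0 fails and those pods cannot start. A quick existence check with the official kubernetes Python client, using the names taken from the log:

    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when run in a pod
    v1 = client.CoreV1Api()

    try:
        v1.read_namespaced_config_map("swift-ring-files", "openstack")
        print("swift-ring-files exists; etc-swift mounts should recover")
    except client.exceptions.ApiException as exc:
        if exc.status == 404:
            # Same condition as 'configmap "swift-ring-files" not found' above.
            print("swift-ring-files missing; the swift pods will keep retrying")
        else:
            raise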
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:40:40 crc kubenswrapper[4688]: I0930 17:40:40.934940 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d763c768-7da3-4c5b-8ae5-9d002b53b580-kube-api-access-gbd97" (OuterVolumeSpecName: "kube-api-access-gbd97") pod "d763c768-7da3-4c5b-8ae5-9d002b53b580" (UID: "d763c768-7da3-4c5b-8ae5-9d002b53b580"). InnerVolumeSpecName "kube-api-access-gbd97". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:40:40 crc kubenswrapper[4688]: I0930 17:40:40.942329 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d763c768-7da3-4c5b-8ae5-9d002b53b580-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d763c768-7da3-4c5b-8ae5-9d002b53b580" (UID: "d763c768-7da3-4c5b-8ae5-9d002b53b580"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.027589 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d763c768-7da3-4c5b-8ae5-9d002b53b580-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.027648 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d763c768-7da3-4c5b-8ae5-9d002b53b580-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.027662 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbd97\" (UniqueName: \"kubernetes.io/projected/d763c768-7da3-4c5b-8ae5-9d002b53b580-kube-api-access-gbd97\") on node \"crc\" DevicePath \"\"" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.254928 4688 generic.go:334] "Generic (PLEG): container finished" podID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerID="ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9" exitCode=0 Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.254988 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlzs" event={"ID":"d763c768-7da3-4c5b-8ae5-9d002b53b580","Type":"ContainerDied","Data":"ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9"} Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.255024 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlzs" event={"ID":"d763c768-7da3-4c5b-8ae5-9d002b53b580","Type":"ContainerDied","Data":"11517d5c3b00bee1be10c7b3f4cae044399102dfb30f9b4cd079ebe78d514ebe"} Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.255049 4688 scope.go:117] "RemoveContainer" containerID="ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.255064 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hlzs" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.289180 4688 scope.go:117] "RemoveContainer" containerID="f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.301733 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlzs"] Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.315806 4688 scope.go:117] "RemoveContainer" containerID="482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.318264 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlzs"] Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.356296 4688 scope.go:117] "RemoveContainer" containerID="ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9" Sep 30 17:40:41 crc kubenswrapper[4688]: E0930 17:40:41.357088 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9\": container with ID starting with ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9 not found: ID does not exist" containerID="ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.357157 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9"} err="failed to get container status \"ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9\": rpc error: code = NotFound desc = could not find container \"ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9\": container with ID starting with ebff1c5888d38670fb56623031f54290c9608a81737f434ba51ea29b6ed75ac9 not found: ID does not exist" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.357203 4688 scope.go:117] "RemoveContainer" containerID="f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c" Sep 30 17:40:41 crc kubenswrapper[4688]: E0930 17:40:41.358142 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c\": container with ID starting with f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c not found: ID does not exist" containerID="f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.358188 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c"} err="failed to get container status \"f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c\": rpc error: code = NotFound desc = could not find container \"f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c\": container with ID starting with f93f73ebd05a3be5207e27d1184c188bbd27cfac6c0bcb109913f370bece762c not found: ID does not exist" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.358206 4688 scope.go:117] "RemoveContainer" containerID="482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880" Sep 30 17:40:41 crc kubenswrapper[4688]: E0930 17:40:41.358665 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880\": container with ID starting with 482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880 not found: ID does not exist" containerID="482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.358720 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880"} err="failed to get container status \"482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880\": rpc error: code = NotFound desc = could not find container \"482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880\": container with ID starting with 482744096db31262a7daeeba69a02335d2ee53ffd35b5887b535853aa778d880 not found: ID does not exist" Sep 30 17:40:41 crc kubenswrapper[4688]: I0930 17:40:41.444400 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d763c768-7da3-4c5b-8ae5-9d002b53b580" path="/var/lib/kubelet/pods/d763c768-7da3-4c5b-8ae5-9d002b53b580/volumes" Sep 30 17:40:52 crc kubenswrapper[4688]: I0930 17:40:52.546361 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:40:52 crc kubenswrapper[4688]: I0930 17:40:52.547804 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:40:52 crc kubenswrapper[4688]: I0930 17:40:52.547891 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:40:52 crc kubenswrapper[4688]: I0930 17:40:52.549196 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:40:52 crc kubenswrapper[4688]: I0930 17:40:52.549310 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" gracePeriod=600 Sep 30 17:40:52 crc kubenswrapper[4688]: E0930 17:40:52.687503 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:40:53 crc kubenswrapper[4688]: I0930 17:40:53.419831 4688 generic.go:334] 
"Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" exitCode=0 Sep 30 17:40:53 crc kubenswrapper[4688]: I0930 17:40:53.419905 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"} Sep 30 17:40:53 crc kubenswrapper[4688]: I0930 17:40:53.420282 4688 scope.go:117] "RemoveContainer" containerID="0c732061131626f7b85d7072fd61b6333264160dfe91ce2f932946d1355bafbc" Sep 30 17:40:53 crc kubenswrapper[4688]: I0930 17:40:53.421074 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" Sep 30 17:40:53 crc kubenswrapper[4688]: E0930 17:40:53.421394 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:41:05 crc kubenswrapper[4688]: I0930 17:41:05.431965 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" Sep 30 17:41:05 crc kubenswrapper[4688]: E0930 17:41:05.433154 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:41:19 crc kubenswrapper[4688]: I0930 17:41:19.430302 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" Sep 30 17:41:19 crc kubenswrapper[4688]: E0930 17:41:19.431415 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:41:31 crc kubenswrapper[4688]: I0930 17:41:31.430254 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" Sep 30 17:41:31 crc kubenswrapper[4688]: E0930 17:41:31.431523 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:41:42 crc kubenswrapper[4688]: I0930 17:41:42.429560 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" 
Sep 30 17:41:42 crc kubenswrapper[4688]: E0930 17:41:42.430708 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:41:55 crc kubenswrapper[4688]: I0930 17:41:55.429984 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:41:55 crc kubenswrapper[4688]: E0930 17:41:55.432086 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:41:59 crc kubenswrapper[4688]: I0930 17:41:59.058259 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8kn9n"]
Sep 30 17:41:59 crc kubenswrapper[4688]: I0930 17:41:59.072410 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-hw9h7"]
Sep 30 17:41:59 crc kubenswrapper[4688]: I0930 17:41:59.083199 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-hw9h7"]
Sep 30 17:41:59 crc kubenswrapper[4688]: I0930 17:41:59.091846 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8kn9n"]
Sep 30 17:41:59 crc kubenswrapper[4688]: I0930 17:41:59.446109 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a6b9bb-4d9c-4166-8d08-58d7428a1d0e" path="/var/lib/kubelet/pods/52a6b9bb-4d9c-4166-8d08-58d7428a1d0e/volumes"
Sep 30 17:41:59 crc kubenswrapper[4688]: I0930 17:41:59.446804 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907d92af-aa64-4815-94d7-f3afd616bdd1" path="/var/lib/kubelet/pods/907d92af-aa64-4815-94d7-f3afd616bdd1/volumes"
Sep 30 17:42:05 crc kubenswrapper[4688]: I0930 17:42:05.447885 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:42:05 crc kubenswrapper[4688]: E0930 17:42:05.448198 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 17:42:05 crc kubenswrapper[4688]: E0930 17:42:05.448788 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Sep 30 17:42:05 crc kubenswrapper[4688]: E0930 17:42:05.448900 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:44:07.44887027 +0000 UTC m=+1704.744307868 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found
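Note the retry arithmetic in these mount failures: the swift-proxy etc-swift operation deferred at 17:40:34 becomes eligible again at 17:42:36, and the swift-storage-0 one deferred here at 17:42:05 comes back at 17:44:07, i.e. the volume manager's backoff is already pinned at the durationBeforeRetry 2m2s ceiling, so identical failures recur every 2m2s for as long as the ConfigMap is missing. A small parser for pulling that schedule out of journal lines in exactly the format above:

    import re

    RETRY = re.compile(
        r'No retries permitted until (?P<until>[\d-]+ [\d:.]+).*'
        r'\(durationBeforeRetry (?P<delay>[^)]+)\)'
    )

    def retry_schedule(journal_lines):
        """Yield (volume, retry-eligible-at, delay) for deferred volume ops."""
        for line in journal_lines:
            match = RETRY.search(line)
            if match:
                vol = re.search(r'volumeName:(\S+)', line)
                yield (vol.group(1) if vol else "?",
                       match.group("until"), match.group("delay"))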
Sep 30 17:42:06 crc kubenswrapper[4688]: E0930 17:42:06.802848 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="cdd2efbd-01c7-43ce-8521-e4b7de27bf28"
Sep 30 17:42:07 crc kubenswrapper[4688]: I0930 17:42:07.251176 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Sep 30 17:42:09 crc kubenswrapper[4688]: I0930 17:42:09.430872 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:42:09 crc kubenswrapper[4688]: E0930 17:42:09.431896 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:42:11 crc kubenswrapper[4688]: I0930 17:42:11.062940 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-05f7-account-create-fj4md"]
Sep 30 17:42:11 crc kubenswrapper[4688]: I0930 17:42:11.074206 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6f1e-account-create-47ljs"]
Sep 30 17:42:11 crc kubenswrapper[4688]: I0930 17:42:11.084187 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bbn9q"]
Sep 30 17:42:11 crc kubenswrapper[4688]: I0930 17:42:11.094665 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-05f7-account-create-fj4md"]
Sep 30 17:42:11 crc kubenswrapper[4688]: I0930 17:42:11.111961 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6f1e-account-create-47ljs"]
Sep 30 17:42:11 crc kubenswrapper[4688]: I0930 17:42:11.123411 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bbn9q"]
Sep 30 17:42:11 crc kubenswrapper[4688]: I0930 17:42:11.444269 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="780ba5d7-2d07-4268-8899-bb4ccfedb41f" path="/var/lib/kubelet/pods/780ba5d7-2d07-4268-8899-bb4ccfedb41f/volumes"
Sep 30 17:42:11 crc kubenswrapper[4688]: I0930 17:42:11.445377 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b944aae0-dfff-452a-8618-1ca22a36ae3a" path="/var/lib/kubelet/pods/b944aae0-dfff-452a-8618-1ca22a36ae3a/volumes"
Sep 30 17:42:11 crc kubenswrapper[4688]: I0930 17:42:11.445938 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba6fa854-2454-4d36-a070-1315f0d36b09" path="/var/lib/kubelet/pods/ba6fa854-2454-4d36-a070-1315f0d36b09/volumes"
Sep 30 17:42:23 crc kubenswrapper[4688]: I0930 17:42:23.430182 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:42:23 crc kubenswrapper[4688]: E0930 17:42:23.431985 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:42:34 crc kubenswrapper[4688]: I0930 17:42:34.046707 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7qnp8"]
Sep 30 17:42:34 crc kubenswrapper[4688]: I0930 17:42:34.059724 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e9f6-account-create-d4bqm"]
Sep 30 17:42:34 crc kubenswrapper[4688]: I0930 17:42:34.072158 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9x9pr"]
Sep 30 17:42:34 crc kubenswrapper[4688]: I0930 17:42:34.082021 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7qnp8"]
Sep 30 17:42:34 crc kubenswrapper[4688]: I0930 17:42:34.090003 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-nwcft"]
Sep 30 17:42:34 crc kubenswrapper[4688]: I0930 17:42:34.099206 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9x9pr"]
Sep 30 17:42:34 crc kubenswrapper[4688]: I0930 17:42:34.105804 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-nwcft"]
Sep 30 17:42:34 crc kubenswrapper[4688]: I0930 17:42:34.113823 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e9f6-account-create-d4bqm"]
Sep 30 17:42:35 crc kubenswrapper[4688]: E0930 17:42:35.155186 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-647757d8c5-6pzkp" podUID="62277faa-c7c7-4151-b35f-f3ebe78d3ffe"
Sep 30 17:42:35 crc kubenswrapper[4688]: I0930 17:42:35.429741 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:42:35 crc kubenswrapper[4688]: E0930 17:42:35.430095 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:42:35 crc kubenswrapper[4688]: I0930 17:42:35.440713 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18659cd0-889f-42a1-8c6a-297f6fd0cd68" path="/var/lib/kubelet/pods/18659cd0-889f-42a1-8c6a-297f6fd0cd68/volumes"
Sep 30 17:42:35 crc kubenswrapper[4688]: I0930 17:42:35.441417 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c56fdac-3acd-4d5f-b702-e25591b6112e" path="/var/lib/kubelet/pods/1c56fdac-3acd-4d5f-b702-e25591b6112e/volumes"
Sep 30 17:42:35 crc kubenswrapper[4688]: I0930 17:42:35.441961 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ffd3d0-11ce-4a79-acb8-540431aa7db1" path="/var/lib/kubelet/pods/50ffd3d0-11ce-4a79-acb8-540431aa7db1/volumes"
Sep 30 17:42:35 crc kubenswrapper[4688]: I0930 17:42:35.442488 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a1e2c1-3d27-4e4a-810d-7c0385457374" path="/var/lib/kubelet/pods/59a1e2c1-3d27-4e4a-810d-7c0385457374/volumes"
Sep 30 17:42:35 crc kubenswrapper[4688]: I0930 17:42:35.587433 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-647757d8c5-6pzkp"
Sep 30 17:42:36 crc kubenswrapper[4688]: I0930 17:42:36.599838 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp"
Sep 30 17:42:36 crc kubenswrapper[4688]: E0930 17:42:36.600006 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 17:42:36 crc kubenswrapper[4688]: E0930 17:42:36.600183 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found
Sep 30 17:42:36 crc kubenswrapper[4688]: E0930 17:42:36.600240 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:44:38.600218925 +0000 UTC m=+1735.895656483 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found
Sep 30 17:42:42 crc kubenswrapper[4688]: I0930 17:42:42.042190 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7rrsk"]
Sep 30 17:42:42 crc kubenswrapper[4688]: I0930 17:42:42.051274 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7rrsk"]
Sep 30 17:42:43 crc kubenswrapper[4688]: I0930 17:42:43.444817 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c69c99-2a0c-4f87-85c6-557b4e133567" path="/var/lib/kubelet/pods/41c69c99-2a0c-4f87-85c6-557b4e133567/volumes"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.564429 4688 scope.go:117] "RemoveContainer" containerID="aebc8f2a37807f3ffd5241f969ab7aa8a5e0130bc59c1133b7928a670a6c94b9"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.596599 4688 scope.go:117] "RemoveContainer" containerID="26a976966efd9ee04f54218a5e581a3e73e685535e90debb6decd459dac6fe36"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.640068 4688 scope.go:117] "RemoveContainer" containerID="ad10fccadb31071187f7d46048bbba7baed415c5797e476575aac63e502b7b8f"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.686777 4688 scope.go:117] "RemoveContainer" containerID="c971d11a5e59ce9361e970b498650bbddbf435e3189eb87cc0ae1ed80a597e85"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.725257 4688 scope.go:117] "RemoveContainer" containerID="b49e537dea620b31736e535f1e17f5fe94908b3176425818df0d7386959d5c56"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.750806 4688 scope.go:117] "RemoveContainer" containerID="8c06aa6b1bad2e7a3012189a0f0094508dfea57854fa169130f139db6f658480"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.771915 4688 scope.go:117] "RemoveContainer" containerID="272e5aba80cc2cc4cc59a5fef26ded3fbd7cc0f3ba9f1c386b1c89b50559908b"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.835691 4688 scope.go:117] "RemoveContainer" containerID="35ff587d4488113a19765dffd380458bf2d1c3247004196e15fdc90fb0eec650"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.862330 4688 scope.go:117] "RemoveContainer" containerID="f7ddba0df00dd0d61b9a410e6930a1f7c1413adf8b6d56a7c0a4f536e777e268"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.894347 4688 scope.go:117] "RemoveContainer" containerID="6d772c022d00a39b6e2a0eadd7e9a805c2b360c22af455dea013c90ed2613980"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.914014 4688 scope.go:117] "RemoveContainer" containerID="a04242423727dfae0acb0628b67a8c724e41e23a4bdc74489b4ff074306e7bb7"
Sep 30 17:42:46 crc kubenswrapper[4688]: I0930 17:42:46.936547 4688 scope.go:117] "RemoveContainer" containerID="840beba06d7b111f3d3ba2cbb4c89f8a333856abd6a564e78c4bad157e3bd4bf"
Sep 30 17:42:49 crc kubenswrapper[4688]: I0930 17:42:49.144250 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4ebc-account-create-vw6b5"]
Sep 30 17:42:49 crc kubenswrapper[4688]: I0930 17:42:49.154830 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fd3a-account-create-fhg2v"]
Sep 30 17:42:49 crc kubenswrapper[4688]: I0930 17:42:49.165527 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4ebc-account-create-vw6b5"]
Sep 30 17:42:49 crc kubenswrapper[4688]: I0930 17:42:49.173498 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-583c-account-create-qmjqp"]
Sep 30 17:42:49 crc kubenswrapper[4688]: I0930 17:42:49.180329 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fd3a-account-create-fhg2v"]
Sep 30 17:42:49 crc kubenswrapper[4688]: I0930 17:42:49.189330 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-583c-account-create-qmjqp"]
Sep 30 17:42:49 crc kubenswrapper[4688]: I0930 17:42:49.441040 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257f987c-0c66-44cf-9d24-7e9ca1855086" path="/var/lib/kubelet/pods/257f987c-0c66-44cf-9d24-7e9ca1855086/volumes"
Sep 30 17:42:49 crc kubenswrapper[4688]: I0930 17:42:49.441650 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48054514-6a95-4e97-be70-14945f8f0896" path="/var/lib/kubelet/pods/48054514-6a95-4e97-be70-14945f8f0896/volumes"
Sep 30 17:42:49 crc kubenswrapper[4688]: I0930 17:42:49.442183 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd9b069-6115-4396-bf19-d5567a1ee8b9" path="/var/lib/kubelet/pods/fcd9b069-6115-4396-bf19-d5567a1ee8b9/volumes"
Sep 30 17:42:50 crc kubenswrapper[4688]: I0930 17:42:50.043244 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-f4dwg"]
Sep 30 17:42:50 crc kubenswrapper[4688]: I0930 17:42:50.053122 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-f4dwg"]
Sep 30 17:42:50 crc kubenswrapper[4688]: I0930 17:42:50.430226 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:42:50 crc kubenswrapper[4688]: E0930 17:42:50.430763 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:42:51 crc kubenswrapper[4688]: I0930 17:42:51.449276 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f" path="/var/lib/kubelet/pods/6df0f6a0-3589-4f52-a7e8-8bf54ad1c09f/volumes"
Sep 30 17:43:02 crc kubenswrapper[4688]: I0930 17:43:02.429562 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:43:02 crc kubenswrapper[4688]: E0930 17:43:02.430765 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:43:17 crc kubenswrapper[4688]: I0930 17:43:17.429866 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:43:17 crc kubenswrapper[4688]: E0930 17:43:17.430870 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:43:30 crc kubenswrapper[4688]: I0930 17:43:30.429956 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:43:30 crc kubenswrapper[4688]: E0930 17:43:30.431001 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:43:36 crc kubenswrapper[4688]: I0930 17:43:36.054607 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ptk4z"]
Sep 30 17:43:36 crc kubenswrapper[4688]: I0930 17:43:36.073053 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ptk4z"]
Sep 30 17:43:37 crc kubenswrapper[4688]: I0930 17:43:37.467801 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8979b0-26c0-428a-bc3e-4ee1d5ea066a" path="/var/lib/kubelet/pods/fc8979b0-26c0-428a-bc3e-4ee1d5ea066a/volumes"
Sep 30 17:43:38 crc kubenswrapper[4688]: I0930 17:43:38.036542 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kd4xf"]
Sep 30 17:43:38 crc kubenswrapper[4688]: I0930 17:43:38.046165 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kd4xf"]
Sep 30 17:43:39 crc kubenswrapper[4688]: I0930 17:43:39.035829 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7f8lt"]
Sep 30 17:43:39 crc kubenswrapper[4688]: I0930 17:43:39.049383 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7f8lt"]
Sep 30 17:43:39 crc kubenswrapper[4688]: I0930 17:43:39.449091 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22982660-e8a2-4e49-9b5d-ebc252fd1c8e" path="/var/lib/kubelet/pods/22982660-e8a2-4e49-9b5d-ebc252fd1c8e/volumes"
Sep 30 17:43:39 crc kubenswrapper[4688]: I0930 17:43:39.450380 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e0dade-3e8d-4c68-b992-e672f563e545" path="/var/lib/kubelet/pods/74e0dade-3e8d-4c68-b992-e672f563e545/volumes"
Sep 30 17:43:41 crc kubenswrapper[4688]: I0930 17:43:41.429608 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:43:41 crc kubenswrapper[4688]: E0930 17:43:41.430258 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:43:47 crc kubenswrapper[4688]: I0930 17:43:47.151728 4688 scope.go:117] "RemoveContainer" containerID="090587b96d5ef62200849cd8b892601a761cb9b887bc68f789f3886110d2e3a0"
Sep 30 17:43:47 crc kubenswrapper[4688]: I0930 17:43:47.196221 4688 scope.go:117] "RemoveContainer" containerID="c83776b1ae18e3a441c563245fa153fe3b8490d430fb5d5cacefc495419f2d87"
Sep 30 17:43:47 crc kubenswrapper[4688]: I0930 17:43:47.240799 4688 scope.go:117] "RemoveContainer" containerID="0c8d05f724e462c6d7719cedfae78f7036f1f5e151aca53dbb627e947f926512"
Sep 30 17:43:47 crc kubenswrapper[4688]: I0930 17:43:47.302817 4688 scope.go:117] "RemoveContainer" containerID="b5746ffe34dd04272d060188c687ada301704724a0b355fcc9a5ce5be5ead2cc"
Sep 30 17:43:47 crc kubenswrapper[4688]: I0930 17:43:47.349005 4688 scope.go:117] "RemoveContainer" containerID="166f8f899bb368511d02c4ed34d4811679b37e0c41a24ad7df4c15d0fe43ed42"
Sep 30 17:43:47 crc kubenswrapper[4688]: I0930 17:43:47.381819 4688 scope.go:117] "RemoveContainer" containerID="ec55c22ff30029c7818869185b6032b0c0831159ac305704af975aef49f18fbe"
Sep 30 17:43:47 crc kubenswrapper[4688]: I0930 17:43:47.421851 4688 scope.go:117] "RemoveContainer" containerID="cd1c90e114797797574e94c4068be898222df2893b6947d587273eac37169a98"
Sep 30 17:43:56 crc kubenswrapper[4688]: I0930 17:43:56.429682 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:43:56 crc kubenswrapper[4688]: E0930 17:43:56.430959 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:44:01 crc kubenswrapper[4688]: I0930 17:44:01.081111 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fxx6x"]
Sep 30 17:44:01 crc kubenswrapper[4688]: I0930 17:44:01.090098 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fxx6x"]
Sep 30 17:44:01 crc kubenswrapper[4688]: I0930 17:44:01.450692 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1c36448-a984-423c-a9b6-6c4d210ca4b9" path="/var/lib/kubelet/pods/d1c36448-a984-423c-a9b6-6c4d210ca4b9/volumes"
Sep 30 17:44:02 crc kubenswrapper[4688]: I0930 17:44:02.038411 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-c6cps"]
Sep 30 17:44:02 crc kubenswrapper[4688]: I0930 17:44:02.049380 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-c6cps"]
Sep 30 17:44:03 crc kubenswrapper[4688]: I0930 17:44:03.442877 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fc61503-23f4-4eb4-b3fa-e1825b84881f" path="/var/lib/kubelet/pods/8fc61503-23f4-4eb4-b3fa-e1825b84881f/volumes"
Sep 30 17:44:07 crc kubenswrapper[4688]: I0930 17:44:07.500997 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0"
Sep 30 17:44:07 crc kubenswrapper[4688]: E0930 17:44:07.501146 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 17:44:07 crc kubenswrapper[4688]: E0930 17:44:07.501728 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Sep 30 17:44:07 crc kubenswrapper[4688]: E0930 17:44:07.501781 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:46:09.50176079 +0000 UTC m=+1826.797198348 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found
Sep 30 17:44:09 crc kubenswrapper[4688]: I0930 17:44:09.434374 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:44:09 crc kubenswrapper[4688]: E0930 17:44:09.440890 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:44:10 crc kubenswrapper[4688]: E0930 17:44:10.253857 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="cdd2efbd-01c7-43ce-8521-e4b7de27bf28"
Sep 30 17:44:10 crc kubenswrapper[4688]: I0930 17:44:10.620872 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Sep 30 17:44:24 crc kubenswrapper[4688]: I0930 17:44:24.429354 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:44:24 crc kubenswrapper[4688]: E0930 17:44:24.431103 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:44:38 crc kubenswrapper[4688]: I0930 17:44:38.430229 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:44:38 crc kubenswrapper[4688]: E0930 17:44:38.430935 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:44:38 crc kubenswrapper[4688]: E0930 17:44:38.588731 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-647757d8c5-6pzkp" podUID="62277faa-c7c7-4151-b35f-f3ebe78d3ffe"
Sep 30 17:44:38 crc kubenswrapper[4688]: I0930 17:44:38.645289 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp"
Sep 30 17:44:38 crc kubenswrapper[4688]: E0930 17:44:38.645546 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 17:44:38 crc kubenswrapper[4688]: E0930 17:44:38.645563 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-647757d8c5-6pzkp: configmap "swift-ring-files" not found
Sep 30 17:44:38 crc kubenswrapper[4688]: E0930 17:44:38.645634 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift podName:62277faa-c7c7-4151-b35f-f3ebe78d3ffe nodeName:}" failed. No retries permitted until 2025-09-30 17:46:40.645616068 +0000 UTC m=+1857.941053626 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift") pod "swift-proxy-647757d8c5-6pzkp" (UID: "62277faa-c7c7-4151-b35f-f3ebe78d3ffe") : configmap "swift-ring-files" not found
Sep 30 17:44:38 crc kubenswrapper[4688]: I0930 17:44:38.915490 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-647757d8c5-6pzkp"
Sep 30 17:44:40 crc kubenswrapper[4688]: I0930 17:44:40.064958 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-d4wgr"]
Sep 30 17:44:40 crc kubenswrapper[4688]: I0930 17:44:40.078034 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-wp4x8"]
Sep 30 17:44:40 crc kubenswrapper[4688]: I0930 17:44:40.093079 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-gx862"]
Sep 30 17:44:40 crc kubenswrapper[4688]: I0930 17:44:40.103608 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-d4wgr"]
Sep 30 17:44:40 crc kubenswrapper[4688]: I0930 17:44:40.121605 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-wp4x8"]
Sep 30 17:44:40 crc kubenswrapper[4688]: I0930 17:44:40.130832 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-gx862"]
Sep 30 17:44:41 crc kubenswrapper[4688]: I0930 17:44:41.439601 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644e9c91-83cf-485d-bddb-700b3d5473e0" path="/var/lib/kubelet/pods/644e9c91-83cf-485d-bddb-700b3d5473e0/volumes"
Sep 30 17:44:41 crc kubenswrapper[4688]: I0930 17:44:41.440250 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa912d04-6534-42ec-9492-db47f7ac0285" path="/var/lib/kubelet/pods/aa912d04-6534-42ec-9492-db47f7ac0285/volumes"
Sep 30 17:44:41 crc kubenswrapper[4688]: I0930 17:44:41.440775 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b5e55d-c196-48cd-9ea7-c357ef7a693a" path="/var/lib/kubelet/pods/d4b5e55d-c196-48cd-9ea7-c357ef7a693a/volumes"
Sep 30 17:44:47 crc kubenswrapper[4688]: I0930 17:44:47.551605 4688 scope.go:117] "RemoveContainer" containerID="7b17f1e32e5232e4129b198b920f8420ef7d12d4ab165c21c9aaee70c321b01b"
Sep 30 17:44:47 crc kubenswrapper[4688]: I0930 17:44:47.588223 4688 scope.go:117] "RemoveContainer" containerID="4f5e0cd6e7f3dba6e718798e89abcd76bd080a628d6dd15364099bbc9e1bd865"
Sep 30 17:44:47 crc kubenswrapper[4688]: I0930 17:44:47.656835 4688 scope.go:117] "RemoveContainer" containerID="cc8625a8863c3f0ede2fdfa14ef9276992da2490d4956c63885946b1e05da68a"
Sep 30 17:44:47 crc kubenswrapper[4688]: I0930 17:44:47.702262 4688 scope.go:117] "RemoveContainer" containerID="df10db116da77b54eab5bdfa6b8fb53455b8c758f859bc8577266cff7e3e6001"
Sep 30 17:44:47 crc kubenswrapper[4688]: I0930 17:44:47.730063 4688 scope.go:117] "RemoveContainer" containerID="cfd7b33eae57eeb85f8259223958cd92c821949ba5867a79789f7e3a1cb9b33b"
Sep 30 17:44:50 crc kubenswrapper[4688]: I0930 17:44:50.429922 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:44:50 crc kubenswrapper[4688]: E0930 17:44:50.430643 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 17:44:59 crc kubenswrapper[4688]: I0930 17:44:59.040795 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fde3-account-create-r75vz"]
Sep 30 17:44:59 crc kubenswrapper[4688]: I0930 17:44:59.050710 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4ab9-account-create-bphg2"]
Sep 30 17:44:59 crc kubenswrapper[4688]: I0930 17:44:59.061186 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8d22-account-create-xdvrr"]
Sep 30 17:44:59 crc kubenswrapper[4688]: I0930 17:44:59.070942 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fde3-account-create-r75vz"]
Sep 30 17:44:59 crc kubenswrapper[4688]: I0930 17:44:59.078983 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4ab9-account-create-bphg2"]
Sep 30 17:44:59 crc kubenswrapper[4688]: I0930 17:44:59.086282 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8d22-account-create-xdvrr"]
Sep 30 17:44:59 crc kubenswrapper[4688]: I0930 17:44:59.442658 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de52239-675f-4b7b-bac5-794f9493a77f" path="/var/lib/kubelet/pods/1de52239-675f-4b7b-bac5-794f9493a77f/volumes"
Sep 30 17:44:59 crc kubenswrapper[4688]: I0930 17:44:59.443324 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32279c8a-a972-4b12-ab61-206e2606a3dc" path="/var/lib/kubelet/pods/32279c8a-a972-4b12-ab61-206e2606a3dc/volumes"
Sep 30 17:44:59 crc kubenswrapper[4688]: I0930 17:44:59.443875 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc67f708-6f4c-4465-acae-c723b8c054a9" path="/var/lib/kubelet/pods/fc67f708-6f4c-4465-acae-c723b8c054a9/volumes"
Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.177507 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6"]
Sep 30 17:45:00 crc kubenswrapper[4688]: E0930 17:45:00.179079 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerName="extract-utilities"
Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.179106 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerName="extract-utilities"
Sep 30 17:45:00 crc kubenswrapper[4688]: E0930 17:45:00.179146 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerName="registry-server"
Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.179155 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerName="registry-server"
Sep 30 17:45:00 crc kubenswrapper[4688]: E0930 17:45:00.179192 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerName="extract-content"
Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.179200 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerName="extract-content"
Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.180025 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d763c768-7da3-4c5b-8ae5-9d002b53b580" containerName="registry-server"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.186054 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.187445 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.210488 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6"] Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.336752 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39961bba-e5a0-4319-b834-b40db142e8a8-config-volume\") pod \"collect-profiles-29320905-rwht6\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.337261 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39961bba-e5a0-4319-b834-b40db142e8a8-secret-volume\") pod \"collect-profiles-29320905-rwht6\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.337489 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98gqf\" (UniqueName: \"kubernetes.io/projected/39961bba-e5a0-4319-b834-b40db142e8a8-kube-api-access-98gqf\") pod \"collect-profiles-29320905-rwht6\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.440311 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98gqf\" (UniqueName: \"kubernetes.io/projected/39961bba-e5a0-4319-b834-b40db142e8a8-kube-api-access-98gqf\") pod \"collect-profiles-29320905-rwht6\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.440845 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39961bba-e5a0-4319-b834-b40db142e8a8-config-volume\") pod \"collect-profiles-29320905-rwht6\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.440989 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39961bba-e5a0-4319-b834-b40db142e8a8-secret-volume\") pod \"collect-profiles-29320905-rwht6\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.442048 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39961bba-e5a0-4319-b834-b40db142e8a8-config-volume\") pod 
\"collect-profiles-29320905-rwht6\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.449656 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39961bba-e5a0-4319-b834-b40db142e8a8-secret-volume\") pod \"collect-profiles-29320905-rwht6\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.469606 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98gqf\" (UniqueName: \"kubernetes.io/projected/39961bba-e5a0-4319-b834-b40db142e8a8-kube-api-access-98gqf\") pod \"collect-profiles-29320905-rwht6\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:00 crc kubenswrapper[4688]: I0930 17:45:00.541034 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:01 crc kubenswrapper[4688]: I0930 17:45:01.063108 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6"] Sep 30 17:45:01 crc kubenswrapper[4688]: I0930 17:45:01.136699 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" event={"ID":"39961bba-e5a0-4319-b834-b40db142e8a8","Type":"ContainerStarted","Data":"5955e19ae9413ee337a41a67de91eaf2dcea6d3ea94e325c90c07c7b6287e742"} Sep 30 17:45:02 crc kubenswrapper[4688]: I0930 17:45:02.149973 4688 generic.go:334] "Generic (PLEG): container finished" podID="39961bba-e5a0-4319-b834-b40db142e8a8" containerID="b497d6bb8c0467de0b589534e58ec72ecc942f873b161d0a796fc1f7fc71b3d1" exitCode=0 Sep 30 17:45:02 crc kubenswrapper[4688]: I0930 17:45:02.150046 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" event={"ID":"39961bba-e5a0-4319-b834-b40db142e8a8","Type":"ContainerDied","Data":"b497d6bb8c0467de0b589534e58ec72ecc942f873b161d0a796fc1f7fc71b3d1"} Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.507859 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.616803 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98gqf\" (UniqueName: \"kubernetes.io/projected/39961bba-e5a0-4319-b834-b40db142e8a8-kube-api-access-98gqf\") pod \"39961bba-e5a0-4319-b834-b40db142e8a8\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.616917 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39961bba-e5a0-4319-b834-b40db142e8a8-secret-volume\") pod \"39961bba-e5a0-4319-b834-b40db142e8a8\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.617011 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39961bba-e5a0-4319-b834-b40db142e8a8-config-volume\") pod \"39961bba-e5a0-4319-b834-b40db142e8a8\" (UID: \"39961bba-e5a0-4319-b834-b40db142e8a8\") " Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.618436 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39961bba-e5a0-4319-b834-b40db142e8a8-config-volume" (OuterVolumeSpecName: "config-volume") pod "39961bba-e5a0-4319-b834-b40db142e8a8" (UID: "39961bba-e5a0-4319-b834-b40db142e8a8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.625675 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39961bba-e5a0-4319-b834-b40db142e8a8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "39961bba-e5a0-4319-b834-b40db142e8a8" (UID: "39961bba-e5a0-4319-b834-b40db142e8a8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.626340 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39961bba-e5a0-4319-b834-b40db142e8a8-kube-api-access-98gqf" (OuterVolumeSpecName: "kube-api-access-98gqf") pod "39961bba-e5a0-4319-b834-b40db142e8a8" (UID: "39961bba-e5a0-4319-b834-b40db142e8a8"). InnerVolumeSpecName "kube-api-access-98gqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.720086 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98gqf\" (UniqueName: \"kubernetes.io/projected/39961bba-e5a0-4319-b834-b40db142e8a8-kube-api-access-98gqf\") on node \"crc\" DevicePath \"\"" Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.720134 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39961bba-e5a0-4319-b834-b40db142e8a8-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:45:03 crc kubenswrapper[4688]: I0930 17:45:03.720149 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39961bba-e5a0-4319-b834-b40db142e8a8-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 17:45:04 crc kubenswrapper[4688]: I0930 17:45:04.170726 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" event={"ID":"39961bba-e5a0-4319-b834-b40db142e8a8","Type":"ContainerDied","Data":"5955e19ae9413ee337a41a67de91eaf2dcea6d3ea94e325c90c07c7b6287e742"} Sep 30 17:45:04 crc kubenswrapper[4688]: I0930 17:45:04.170799 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5955e19ae9413ee337a41a67de91eaf2dcea6d3ea94e325c90c07c7b6287e742" Sep 30 17:45:04 crc kubenswrapper[4688]: I0930 17:45:04.170895 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6" Sep 30 17:45:05 crc kubenswrapper[4688]: I0930 17:45:05.430282 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" Sep 30 17:45:05 crc kubenswrapper[4688]: E0930 17:45:05.431121 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:45:17 crc kubenswrapper[4688]: I0930 17:45:17.431764 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" Sep 30 17:45:17 crc kubenswrapper[4688]: E0930 17:45:17.433121 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:45:27 crc kubenswrapper[4688]: I0930 17:45:27.052392 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfglj"] Sep 30 17:45:27 crc kubenswrapper[4688]: I0930 17:45:27.062450 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rfglj"] Sep 30 17:45:27 crc kubenswrapper[4688]: I0930 17:45:27.457765 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="931d7b6d-de48-4402-ac4e-70fcfa27004a" 
path="/var/lib/kubelet/pods/931d7b6d-de48-4402-ac4e-70fcfa27004a/volumes" Sep 30 17:45:28 crc kubenswrapper[4688]: I0930 17:45:28.429721 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" Sep 30 17:45:28 crc kubenswrapper[4688]: E0930 17:45:28.430186 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:45:42 crc kubenswrapper[4688]: I0930 17:45:42.429928 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" Sep 30 17:45:42 crc kubenswrapper[4688]: E0930 17:45:42.430912 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:45:47 crc kubenswrapper[4688]: I0930 17:45:47.890214 4688 scope.go:117] "RemoveContainer" containerID="4817a472d4fd71ca7c0a37dc483c2f83e41e342bb2737b7f52382438929aa0fc" Sep 30 17:45:47 crc kubenswrapper[4688]: I0930 17:45:47.911560 4688 scope.go:117] "RemoveContainer" containerID="2bde185ac7b50523c52dddadbab31b31b527ba70803af40e5a65ee7a627e1909" Sep 30 17:45:47 crc kubenswrapper[4688]: I0930 17:45:47.977005 4688 scope.go:117] "RemoveContainer" containerID="631140fbefdf517056a9847780b117a956f65248a7a92bad895e0fa2ab2cac03" Sep 30 17:45:48 crc kubenswrapper[4688]: I0930 17:45:48.003772 4688 scope.go:117] "RemoveContainer" containerID="e3d56a6086565e02522e9de4c75ff3d121574b8576364d578ead13aae1a30fe1" Sep 30 17:45:51 crc kubenswrapper[4688]: I0930 17:45:51.037822 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vxwsz"] Sep 30 17:45:51 crc kubenswrapper[4688]: I0930 17:45:51.046025 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-vxwsz"] Sep 30 17:45:51 crc kubenswrapper[4688]: I0930 17:45:51.442626 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415f4a12-f67d-4bff-a041-94cc19a0abf4" path="/var/lib/kubelet/pods/415f4a12-f67d-4bff-a041-94cc19a0abf4/volumes" Sep 30 17:45:56 crc kubenswrapper[4688]: I0930 17:45:56.429198 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0" Sep 30 17:45:57 crc kubenswrapper[4688]: I0930 17:45:57.654316 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"2b77f55d7b58dbf58878e2a71bf48c68897966db1af74b1ec851794bc9fa339c"} Sep 30 17:46:01 crc kubenswrapper[4688]: I0930 17:46:01.929430 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9rwlk"] Sep 30 17:46:01 crc kubenswrapper[4688]: E0930 17:46:01.931356 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="39961bba-e5a0-4319-b834-b40db142e8a8" containerName="collect-profiles" Sep 30 17:46:01 crc kubenswrapper[4688]: I0930 17:46:01.931387 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="39961bba-e5a0-4319-b834-b40db142e8a8" containerName="collect-profiles" Sep 30 17:46:01 crc kubenswrapper[4688]: I0930 17:46:01.931844 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="39961bba-e5a0-4319-b834-b40db142e8a8" containerName="collect-profiles" Sep 30 17:46:01 crc kubenswrapper[4688]: I0930 17:46:01.936038 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:01 crc kubenswrapper[4688]: I0930 17:46:01.939179 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Sep 30 17:46:01 crc kubenswrapper[4688]: I0930 17:46:01.939319 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Sep 30 17:46:01 crc kubenswrapper[4688]: I0930 17:46:01.945298 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9rwlk"] Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.053001 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-p7bwc"] Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.064962 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-p7bwc"] Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.088673 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-scripts\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.088732 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-swiftconf\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.088764 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-dispersionconf\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.088794 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-ring-data-devices\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.089051 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-etc-swift\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.089136 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7kvx\" (UniqueName: \"kubernetes.io/projected/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-kube-api-access-x7kvx\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.089263 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-combined-ca-bundle\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.190986 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7kvx\" (UniqueName: \"kubernetes.io/projected/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-kube-api-access-x7kvx\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.191167 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-combined-ca-bundle\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.191274 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-scripts\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.191304 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-swiftconf\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.192229 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-scripts\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.192329 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-dispersionconf\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.192708 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-ring-data-devices\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.192807 4688 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-etc-swift\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.193091 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-etc-swift\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.193638 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-ring-data-devices\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.197885 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-swiftconf\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.198444 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-dispersionconf\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.200151 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-combined-ca-bundle\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.212374 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7kvx\" (UniqueName: \"kubernetes.io/projected/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-kube-api-access-x7kvx\") pod \"swift-ring-rebalance-9rwlk\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.280249 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-m4sc8" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.288262 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.768069 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9rwlk"] Sep 30 17:46:02 crc kubenswrapper[4688]: I0930 17:46:02.779529 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:46:03 crc kubenswrapper[4688]: I0930 17:46:03.455273 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da0bc04-a53e-4880-9a5f-3c6152c8e360" path="/var/lib/kubelet/pods/3da0bc04-a53e-4880-9a5f-3c6152c8e360/volumes" Sep 30 17:46:03 crc kubenswrapper[4688]: I0930 17:46:03.723440 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9rwlk" event={"ID":"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd","Type":"ContainerStarted","Data":"75ecdf85f646660938344f5cf3aafcaa59c66a4852a420b6e08eb84926eb45ac"} Sep 30 17:46:06 crc kubenswrapper[4688]: I0930 17:46:06.777103 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9rwlk" event={"ID":"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd","Type":"ContainerStarted","Data":"e6517e10fc0daa92ac2176bd6dfb7e65c752eff52a4b5eb5e815b490abb1fe7e"} Sep 30 17:46:06 crc kubenswrapper[4688]: I0930 17:46:06.804315 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9rwlk" podStartSLOduration=2.815594171 podStartE2EDuration="5.804294396s" podCreationTimestamp="2025-09-30 17:46:01 +0000 UTC" firstStartedPulling="2025-09-30 17:46:02.779162038 +0000 UTC m=+1820.074599626" lastFinishedPulling="2025-09-30 17:46:05.767862293 +0000 UTC m=+1823.063299851" observedRunningTime="2025-09-30 17:46:06.800162939 +0000 UTC m=+1824.095600507" watchObservedRunningTime="2025-09-30 17:46:06.804294396 +0000 UTC m=+1824.099731964" Sep 30 17:46:09 crc kubenswrapper[4688]: I0930 17:46:09.538271 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:46:09 crc kubenswrapper[4688]: E0930 17:46:09.538619 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 17:46:09 crc kubenswrapper[4688]: E0930 17:46:09.538665 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 17:46:09 crc kubenswrapper[4688]: E0930 17:46:09.538767 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift podName:cdd2efbd-01c7-43ce-8521-e4b7de27bf28 nodeName:}" failed. No retries permitted until 2025-09-30 17:48:11.538720556 +0000 UTC m=+1948.834158114 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift") pod "swift-storage-0" (UID: "cdd2efbd-01c7-43ce-8521-e4b7de27bf28") : configmap "swift-ring-files" not found Sep 30 17:46:13 crc kubenswrapper[4688]: E0930 17:46:13.623087 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="cdd2efbd-01c7-43ce-8521-e4b7de27bf28" Sep 30 17:46:13 crc kubenswrapper[4688]: I0930 17:46:13.849631 4688 generic.go:334] "Generic (PLEG): container finished" podID="d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" containerID="e6517e10fc0daa92ac2176bd6dfb7e65c752eff52a4b5eb5e815b490abb1fe7e" exitCode=0 Sep 30 17:46:13 crc kubenswrapper[4688]: I0930 17:46:13.849710 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9rwlk" event={"ID":"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd","Type":"ContainerDied","Data":"e6517e10fc0daa92ac2176bd6dfb7e65c752eff52a4b5eb5e815b490abb1fe7e"} Sep 30 17:46:13 crc kubenswrapper[4688]: I0930 17:46:13.850141 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.250818 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.270356 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-ring-data-devices\") pod \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.270687 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-etc-swift\") pod \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.270817 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-combined-ca-bundle\") pod \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.271022 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-scripts\") pod \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.271217 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-swiftconf\") pod \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.271329 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-dispersionconf\") pod \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\" (UID: 
\"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.271445 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7kvx\" (UniqueName: \"kubernetes.io/projected/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-kube-api-access-x7kvx\") pod \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\" (UID: \"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd\") " Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.272022 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" (UID: "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.273382 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" (UID: "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.274213 4688 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.278443 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-kube-api-access-x7kvx" (OuterVolumeSpecName: "kube-api-access-x7kvx") pod "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" (UID: "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd"). InnerVolumeSpecName "kube-api-access-x7kvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.300185 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-scripts" (OuterVolumeSpecName: "scripts") pod "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" (UID: "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.302871 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" (UID: "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.307233 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" (UID: "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.311207 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" (UID: "d8f8de37-9747-4c9d-b61b-f89ad32ba2fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.380479 4688 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-etc-swift\") on node \"crc\" DevicePath \"\"" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.383152 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.383177 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.383189 4688 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-swiftconf\") on node \"crc\" DevicePath \"\"" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.383200 4688 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-dispersionconf\") on node \"crc\" DevicePath \"\"" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.383213 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7kvx\" (UniqueName: \"kubernetes.io/projected/d8f8de37-9747-4c9d-b61b-f89ad32ba2fd-kube-api-access-x7kvx\") on node \"crc\" DevicePath \"\"" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.877093 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9rwlk" event={"ID":"d8f8de37-9747-4c9d-b61b-f89ad32ba2fd","Type":"ContainerDied","Data":"75ecdf85f646660938344f5cf3aafcaa59c66a4852a420b6e08eb84926eb45ac"} Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.877143 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ecdf85f646660938344f5cf3aafcaa59c66a4852a420b6e08eb84926eb45ac" Sep 30 17:46:15 crc kubenswrapper[4688]: I0930 17:46:15.877203 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9rwlk" Sep 30 17:46:37 crc kubenswrapper[4688]: I0930 17:46:37.059130 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-7pwh8"] Sep 30 17:46:37 crc kubenswrapper[4688]: I0930 17:46:37.070524 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-7pwh8"] Sep 30 17:46:37 crc kubenswrapper[4688]: I0930 17:46:37.444024 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17944330-bf7d-4ae7-bb30-588fc5c65b3f" path="/var/lib/kubelet/pods/17944330-bf7d-4ae7-bb30-588fc5c65b3f/volumes" Sep 30 17:46:40 crc kubenswrapper[4688]: I0930 17:46:40.658090 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:46:40 crc kubenswrapper[4688]: I0930 17:46:40.664871 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/62277faa-c7c7-4151-b35f-f3ebe78d3ffe-etc-swift\") pod \"swift-proxy-647757d8c5-6pzkp\" (UID: \"62277faa-c7c7-4151-b35f-f3ebe78d3ffe\") " pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:46:40 crc kubenswrapper[4688]: I0930 17:46:40.716478 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:46:41 crc kubenswrapper[4688]: I0930 17:46:41.258860 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-647757d8c5-6pzkp"] Sep 30 17:46:42 crc kubenswrapper[4688]: I0930 17:46:42.150595 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-647757d8c5-6pzkp" event={"ID":"62277faa-c7c7-4151-b35f-f3ebe78d3ffe","Type":"ContainerStarted","Data":"2fc2640993924d914947852b99983dc149d48778b69923a5dae771791d52d4b1"} Sep 30 17:46:42 crc kubenswrapper[4688]: I0930 17:46:42.151114 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:46:42 crc kubenswrapper[4688]: I0930 17:46:42.151135 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-647757d8c5-6pzkp" event={"ID":"62277faa-c7c7-4151-b35f-f3ebe78d3ffe","Type":"ContainerStarted","Data":"48f562a3a39f769284cdfb1302b1cc9edc691f38c9c1d11550fca52939c6c97a"} Sep 30 17:46:42 crc kubenswrapper[4688]: I0930 17:46:42.151146 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-647757d8c5-6pzkp" event={"ID":"62277faa-c7c7-4151-b35f-f3ebe78d3ffe","Type":"ContainerStarted","Data":"e113a6a310cba8102f35bb0b1abae3362efed869966ccfc3bfe67c051f409c65"} Sep 30 17:46:42 crc kubenswrapper[4688]: I0930 17:46:42.189334 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-647757d8c5-6pzkp" podStartSLOduration=740.189304719 podStartE2EDuration="12m20.189304719s" podCreationTimestamp="2025-09-30 17:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:46:42.177609646 +0000 UTC m=+1859.473047204" watchObservedRunningTime="2025-09-30 17:46:42.189304719 +0000 UTC m=+1859.484742277" Sep 30 17:46:43 crc kubenswrapper[4688]: I0930 17:46:43.161308 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:46:48 crc kubenswrapper[4688]: I0930 17:46:48.149589 4688 scope.go:117] "RemoveContainer" containerID="d01846ea8c7d40f2eb4b276e89aa57cd66fed47a0e75890edc84c8c9f222a208" Sep 30 17:46:48 crc kubenswrapper[4688]: I0930 17:46:48.192965 4688 scope.go:117] "RemoveContainer" containerID="03f4b6ab198039e44313643be5ca13e96f3aa82ac4ec593d39bf68104b2cebe4" Sep 30 17:46:48 crc kubenswrapper[4688]: I0930 17:46:48.248892 4688 scope.go:117] "RemoveContainer" containerID="54bd07146cb7cc20ae2c4b34620108e0dd35838257e7dc889bb050b6dbb12d03" Sep 30 17:46:50 crc kubenswrapper[4688]: I0930 17:46:50.723417 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:46:50 crc kubenswrapper[4688]: I0930 17:46:50.724485 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-647757d8c5-6pzkp" Sep 30 17:48:11 crc kubenswrapper[4688]: I0930 17:48:11.638058 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:48:11 crc kubenswrapper[4688]: I0930 17:48:11.647087 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdd2efbd-01c7-43ce-8521-e4b7de27bf28-etc-swift\") pod \"swift-storage-0\" (UID: \"cdd2efbd-01c7-43ce-8521-e4b7de27bf28\") " pod="openstack/swift-storage-0" Sep 30 17:48:11 crc kubenswrapper[4688]: I0930 17:48:11.752696 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Sep 30 17:48:12 crc kubenswrapper[4688]: I0930 17:48:12.337404 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Sep 30 17:48:12 crc kubenswrapper[4688]: I0930 17:48:12.948782 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"8dc802afb8fbdb3c5e2def5242e2ad5430bd98e1500f11f4d3cbaf43a701f73f"} Sep 30 17:48:13 crc kubenswrapper[4688]: I0930 17:48:13.961771 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"2689e62fc17febfa3e205f8f6e86a7e10f3c7f51158cfdb141cdffdc91ff51f1"} Sep 30 17:48:13 crc kubenswrapper[4688]: I0930 17:48:13.963656 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"c985035907054a05534d0146b5d670a7d8480c85503be95c153c7403a8c78105"} Sep 30 17:48:14 crc kubenswrapper[4688]: I0930 17:48:14.974910 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"32ada3a5b1dc0e6d40cf46d02ae724f89693e9dda4573f1b70b3a81a43a3e974"} Sep 30 17:48:14 crc kubenswrapper[4688]: I0930 17:48:14.975210 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"a110e9d63342a7bcb60fc156e82d37f70bb19aa42ff955135a52206352a2a2cd"} Sep 30 17:48:15 crc kubenswrapper[4688]: I0930 17:48:15.989497 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"6007235ad41006622472043d263ae8f7a86c2aab2b78c194d2fffe5816434a9c"} Sep 30 17:48:17 crc kubenswrapper[4688]: I0930 17:48:17.015174 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"25f22d231d965b4301a7f6978566c0fae1c3fd4b168b874c8df1f2e32d04e6d5"} Sep 30 17:48:17 crc kubenswrapper[4688]: I0930 17:48:17.015765 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"1aaae92c433d6c4ca08c590245d4e5c60122b6ae591f7daf4f496cbce959886b"} Sep 30 17:48:17 crc kubenswrapper[4688]: I0930 17:48:17.015792 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"b9e87c417ca008a7ad4c673ccfe59cddfc817319a8447eb62476951c313caf6c"} Sep 30 17:48:18 crc kubenswrapper[4688]: I0930 17:48:18.031223 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"1dbeccf1e059932178d97adb101cc5bc6ff99679a36ffbb2137ecd0e4f80f09f"} Sep 30 17:48:18 crc kubenswrapper[4688]: I0930 17:48:18.031527 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"4f10613fafc293f96638aa11585dc38fbeaa1bca2c9081ab9807b3b44375f1d4"} Sep 30 17:48:18 crc 
kubenswrapper[4688]: I0930 17:48:18.031542 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"cce5b0784a64a52b941c90ae0512b7dcc4dbe5f4158634ca6c33f2841194d6b3"} Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.062071 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"2427d5695447720f7f7ec452f0d5072da090dc4e12243c335527ce9f7c949a1c"} Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.062629 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"c4aca1f5e6e316f0bef0b357ec47e0bdcad90759faa2cfeb7cac5be5874a958d"} Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.062651 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"29248d4192df05518a0770d065ef4b3bd87afa1dd77f45cce77c28c56a05cfc7"} Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.062668 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdd2efbd-01c7-43ce-8521-e4b7de27bf28","Type":"ContainerStarted","Data":"a04fe614209ae5b17b3ced05f68b6c35f9bb84f6848bd7d52ddd98c864a3ba68"} Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.117037 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=987.056121329 podStartE2EDuration="16m32.11701597s" podCreationTimestamp="2025-09-30 17:31:47 +0000 UTC" firstStartedPulling="2025-09-30 17:48:12.341686199 +0000 UTC m=+1949.637123757" lastFinishedPulling="2025-09-30 17:48:17.40258084 +0000 UTC m=+1954.698018398" observedRunningTime="2025-09-30 17:48:19.114510745 +0000 UTC m=+1956.409948303" watchObservedRunningTime="2025-09-30 17:48:19.11701597 +0000 UTC m=+1956.412453528" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.511450 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-778d7c596f-8x8m5"] Sep 30 17:48:19 crc kubenswrapper[4688]: E0930 17:48:19.511895 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" containerName="swift-ring-rebalance" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.511910 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" containerName="swift-ring-rebalance" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.512114 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f8de37-9747-4c9d-b61b-f89ad32ba2fd" containerName="swift-ring-rebalance" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.513157 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.514751 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.525244 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-778d7c596f-8x8m5"] Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.605112 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-config\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.605521 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnr8t\" (UniqueName: \"kubernetes.io/projected/03b603fb-d4ec-4310-a293-f9cfcbcd5495-kube-api-access-lnr8t\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.605586 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-svc\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.605679 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-sb\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.605735 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-nb\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.605780 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-swift-storage-0\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.707965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnr8t\" (UniqueName: \"kubernetes.io/projected/03b603fb-d4ec-4310-a293-f9cfcbcd5495-kube-api-access-lnr8t\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.708048 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-svc\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: 
\"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.708112 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-sb\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.708164 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-nb\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.708211 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-swift-storage-0\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.708269 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-config\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.709698 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-config\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.709727 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-sb\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.709997 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-swift-storage-0\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.710175 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-svc\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.711304 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-nb\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc 
kubenswrapper[4688]: I0930 17:48:19.735750 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnr8t\" (UniqueName: \"kubernetes.io/projected/03b603fb-d4ec-4310-a293-f9cfcbcd5495-kube-api-access-lnr8t\") pod \"dnsmasq-dns-778d7c596f-8x8m5\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:19 crc kubenswrapper[4688]: I0930 17:48:19.833670 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:20 crc kubenswrapper[4688]: I0930 17:48:20.305311 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-778d7c596f-8x8m5"] Sep 30 17:48:20 crc kubenswrapper[4688]: W0930 17:48:20.308082 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03b603fb_d4ec_4310_a293_f9cfcbcd5495.slice/crio-b444ec1dc3d83b5a0111dbd4745454c72167ce878e0a981fb4c68b872c06b126 WatchSource:0}: Error finding container b444ec1dc3d83b5a0111dbd4745454c72167ce878e0a981fb4c68b872c06b126: Status 404 returned error can't find the container with id b444ec1dc3d83b5a0111dbd4745454c72167ce878e0a981fb4c68b872c06b126 Sep 30 17:48:21 crc kubenswrapper[4688]: I0930 17:48:21.087367 4688 generic.go:334] "Generic (PLEG): container finished" podID="03b603fb-d4ec-4310-a293-f9cfcbcd5495" containerID="b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588" exitCode=0 Sep 30 17:48:21 crc kubenswrapper[4688]: I0930 17:48:21.087447 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" event={"ID":"03b603fb-d4ec-4310-a293-f9cfcbcd5495","Type":"ContainerDied","Data":"b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588"} Sep 30 17:48:21 crc kubenswrapper[4688]: I0930 17:48:21.087707 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" event={"ID":"03b603fb-d4ec-4310-a293-f9cfcbcd5495","Type":"ContainerStarted","Data":"b444ec1dc3d83b5a0111dbd4745454c72167ce878e0a981fb4c68b872c06b126"} Sep 30 17:48:22 crc kubenswrapper[4688]: I0930 17:48:22.101053 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" event={"ID":"03b603fb-d4ec-4310-a293-f9cfcbcd5495","Type":"ContainerStarted","Data":"c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340"} Sep 30 17:48:22 crc kubenswrapper[4688]: I0930 17:48:22.101456 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:22 crc kubenswrapper[4688]: I0930 17:48:22.141554 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" podStartSLOduration=3.141525557 podStartE2EDuration="3.141525557s" podCreationTimestamp="2025-09-30 17:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:48:22.125658396 +0000 UTC m=+1959.421095964" watchObservedRunningTime="2025-09-30 17:48:22.141525557 +0000 UTC m=+1959.436963135" Sep 30 17:48:22 crc kubenswrapper[4688]: I0930 17:48:22.545680 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Sep 30 17:48:22 crc kubenswrapper[4688]: I0930 17:48:22.545808 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:48:29 crc kubenswrapper[4688]: I0930 17:48:29.835840 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:48:29 crc kubenswrapper[4688]: I0930 17:48:29.950227 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95bd95597-kqlzq"] Sep 30 17:48:29 crc kubenswrapper[4688]: I0930 17:48:29.950716 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" podUID="3ba20aa0-1673-4180-ae96-ad60a6d8732e" containerName="dnsmasq-dns" containerID="cri-o://53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138" gracePeriod=10 Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.608152 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.664099 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb6fr\" (UniqueName: \"kubernetes.io/projected/3ba20aa0-1673-4180-ae96-ad60a6d8732e-kube-api-access-jb6fr\") pod \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.664192 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-nb\") pod \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.664214 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-sb\") pod \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.664247 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-dns-svc\") pod \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.664395 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-config\") pod \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\" (UID: \"3ba20aa0-1673-4180-ae96-ad60a6d8732e\") " Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.671506 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba20aa0-1673-4180-ae96-ad60a6d8732e-kube-api-access-jb6fr" (OuterVolumeSpecName: "kube-api-access-jb6fr") pod "3ba20aa0-1673-4180-ae96-ad60a6d8732e" (UID: "3ba20aa0-1673-4180-ae96-ad60a6d8732e"). InnerVolumeSpecName "kube-api-access-jb6fr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.728323 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ba20aa0-1673-4180-ae96-ad60a6d8732e" (UID: "3ba20aa0-1673-4180-ae96-ad60a6d8732e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.729691 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ba20aa0-1673-4180-ae96-ad60a6d8732e" (UID: "3ba20aa0-1673-4180-ae96-ad60a6d8732e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.744657 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-config" (OuterVolumeSpecName: "config") pod "3ba20aa0-1673-4180-ae96-ad60a6d8732e" (UID: "3ba20aa0-1673-4180-ae96-ad60a6d8732e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.754153 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ba20aa0-1673-4180-ae96-ad60a6d8732e" (UID: "3ba20aa0-1673-4180-ae96-ad60a6d8732e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.767404 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb6fr\" (UniqueName: \"kubernetes.io/projected/3ba20aa0-1673-4180-ae96-ad60a6d8732e-kube-api-access-jb6fr\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.767445 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.767457 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.767472 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:30 crc kubenswrapper[4688]: I0930 17:48:30.767483 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba20aa0-1673-4180-ae96-ad60a6d8732e-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.181151 4688 generic.go:334] "Generic (PLEG): container finished" podID="3ba20aa0-1673-4180-ae96-ad60a6d8732e" containerID="53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138" exitCode=0 Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.181221 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.181249 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" event={"ID":"3ba20aa0-1673-4180-ae96-ad60a6d8732e","Type":"ContainerDied","Data":"53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138"} Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.181631 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95bd95597-kqlzq" event={"ID":"3ba20aa0-1673-4180-ae96-ad60a6d8732e","Type":"ContainerDied","Data":"df95dadf5399e5d26761a73f4a48c7ddc6fd8f77afb29cea268976a5af552b2d"} Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.181656 4688 scope.go:117] "RemoveContainer" containerID="53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138" Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.212579 4688 scope.go:117] "RemoveContainer" containerID="84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76" Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.215694 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95bd95597-kqlzq"] Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.222628 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95bd95597-kqlzq"] Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.236253 4688 scope.go:117] "RemoveContainer" containerID="53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138" Sep 30 17:48:31 crc kubenswrapper[4688]: E0930 17:48:31.236825 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138\": container with ID starting with 53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138 not found: ID does not exist" containerID="53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138" Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.236879 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138"} err="failed to get container status \"53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138\": rpc error: code = NotFound desc = could not find container \"53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138\": container with ID starting with 53d7689e03ac1117eee808897d47d7f4a1862b002a8668241a2b024c0e760138 not found: ID does not exist" Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.236904 4688 scope.go:117] "RemoveContainer" containerID="84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76" Sep 30 17:48:31 crc kubenswrapper[4688]: E0930 17:48:31.237210 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76\": container with ID starting with 84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76 not found: ID does not exist" containerID="84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76" Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.237236 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76"} err="failed to get container status 
\"84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76\": rpc error: code = NotFound desc = could not find container \"84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76\": container with ID starting with 84c19f5ad5aa7ed63fc34bf877dcb29f0ddb48d01e17303ad9b06a60bd187d76 not found: ID does not exist" Sep 30 17:48:31 crc kubenswrapper[4688]: I0930 17:48:31.445870 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba20aa0-1673-4180-ae96-ad60a6d8732e" path="/var/lib/kubelet/pods/3ba20aa0-1673-4180-ae96-ad60a6d8732e/volumes" Sep 30 17:48:38 crc kubenswrapper[4688]: I0930 17:48:38.029441 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 17:48:39 crc kubenswrapper[4688]: I0930 17:48:39.736217 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:48:42 crc kubenswrapper[4688]: I0930 17:48:42.780844 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="5cfbeb46-f5ad-46f6-93e0-72c296703b5c" containerName="rabbitmq" containerID="cri-o://45ec4b1e08c87e09ea5749f48aabeb5a9950badd438ff358ef821f6b233bec3a" gracePeriod=604796 Sep 30 17:48:43 crc kubenswrapper[4688]: I0930 17:48:43.931112 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b5a0754e-917b-4fcf-bc77-3b510479a986" containerName="rabbitmq" containerID="cri-o://ee2346cdd1ae34be3c853bf0829ee468cea2299a295759b6eea684fa3fe690da" gracePeriod=604796 Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.342153 4688 generic.go:334] "Generic (PLEG): container finished" podID="5cfbeb46-f5ad-46f6-93e0-72c296703b5c" containerID="45ec4b1e08c87e09ea5749f48aabeb5a9950badd438ff358ef821f6b233bec3a" exitCode=0 Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.342227 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cfbeb46-f5ad-46f6-93e0-72c296703b5c","Type":"ContainerDied","Data":"45ec4b1e08c87e09ea5749f48aabeb5a9950badd438ff358ef821f6b233bec3a"} Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.343025 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5cfbeb46-f5ad-46f6-93e0-72c296703b5c","Type":"ContainerDied","Data":"a5d7b56d466ed34cf382fffaa40919250bb7432a3639040501d577b0e2810188"} Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.343065 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5d7b56d466ed34cf382fffaa40919250bb7432a3639040501d577b0e2810188" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.364874 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.466553 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.466922 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-config-data\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.466963 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xjv2\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-kube-api-access-4xjv2\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.467003 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-pod-info\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.467049 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-tls\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.467242 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-erlang-cookie-secret\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.467309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-server-conf\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.467338 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-confd\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.467359 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-plugins-conf\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.467399 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-plugins\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: 
\"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.467429 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-erlang-cookie\") pod \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\" (UID: \"5cfbeb46-f5ad-46f6-93e0-72c296703b5c\") " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.471112 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.471139 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.475591 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.476837 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.478079 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.479219 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-pod-info" (OuterVolumeSpecName: "pod-info") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.488065 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-kube-api-access-4xjv2" (OuterVolumeSpecName: "kube-api-access-4xjv2") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "kube-api-access-4xjv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.504843 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.542061 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-config-data" (OuterVolumeSpecName: "config-data") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.547439 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-server-conf" (OuterVolumeSpecName: "server-conf") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.571253 4688 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.571288 4688 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-server-conf\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.571298 4688 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-plugins-conf\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.571306 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.571317 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.571339 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.571348 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.571357 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xjv2\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-kube-api-access-4xjv2\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc 
kubenswrapper[4688]: I0930 17:48:49.571366 4688 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-pod-info\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.571375 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.586726 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5cfbeb46-f5ad-46f6-93e0-72c296703b5c" (UID: "5cfbeb46-f5ad-46f6-93e0-72c296703b5c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.597586 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.677533 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:49 crc kubenswrapper[4688]: I0930 17:48:49.677593 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cfbeb46-f5ad-46f6-93e0-72c296703b5c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.354903 4688 generic.go:334] "Generic (PLEG): container finished" podID="b5a0754e-917b-4fcf-bc77-3b510479a986" containerID="ee2346cdd1ae34be3c853bf0829ee468cea2299a295759b6eea684fa3fe690da" exitCode=0 Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.355324 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.355047 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5a0754e-917b-4fcf-bc77-3b510479a986","Type":"ContainerDied","Data":"ee2346cdd1ae34be3c853bf0829ee468cea2299a295759b6eea684fa3fe690da"} Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.407489 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.418309 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.439794 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 17:48:50 crc kubenswrapper[4688]: E0930 17:48:50.440207 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba20aa0-1673-4180-ae96-ad60a6d8732e" containerName="init" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.440225 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba20aa0-1673-4180-ae96-ad60a6d8732e" containerName="init" Sep 30 17:48:50 crc kubenswrapper[4688]: E0930 17:48:50.440247 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba20aa0-1673-4180-ae96-ad60a6d8732e" containerName="dnsmasq-dns" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.440254 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba20aa0-1673-4180-ae96-ad60a6d8732e" containerName="dnsmasq-dns" Sep 30 17:48:50 crc kubenswrapper[4688]: E0930 17:48:50.440262 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbeb46-f5ad-46f6-93e0-72c296703b5c" containerName="rabbitmq" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.440272 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbeb46-f5ad-46f6-93e0-72c296703b5c" containerName="rabbitmq" Sep 30 17:48:50 crc kubenswrapper[4688]: E0930 17:48:50.440295 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbeb46-f5ad-46f6-93e0-72c296703b5c" containerName="setup-container" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.440301 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbeb46-f5ad-46f6-93e0-72c296703b5c" containerName="setup-container" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.440470 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfbeb46-f5ad-46f6-93e0-72c296703b5c" containerName="rabbitmq" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.440503 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba20aa0-1673-4180-ae96-ad60a6d8732e" containerName="dnsmasq-dns" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.441504 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.447937 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.448123 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.448229 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.448271 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.448380 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.448583 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9d988" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.457980 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.466549 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495215 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495268 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495300 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495390 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wptl5\" (UniqueName: \"kubernetes.io/projected/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-kube-api-access-wptl5\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495443 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495483 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495511 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495553 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-config-data\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495598 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495621 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.495650 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.576338 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597221 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597388 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-server-conf\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597427 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-confd\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597461 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-plugins-conf\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597526 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5a0754e-917b-4fcf-bc77-3b510479a986-pod-info\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597636 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-config-data\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597674 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-plugins\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597738 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-erlang-cookie\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597772 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5a0754e-917b-4fcf-bc77-3b510479a986-erlang-cookie-secret\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597837 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jczdd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-kube-api-access-jczdd\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: 
\"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.597866 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-tls\") pod \"b5a0754e-917b-4fcf-bc77-3b510479a986\" (UID: \"b5a0754e-917b-4fcf-bc77-3b510479a986\") " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598203 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wptl5\" (UniqueName: \"kubernetes.io/projected/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-kube-api-access-wptl5\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598330 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598382 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598410 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598454 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-config-data\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598479 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598502 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598539 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598620 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598651 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.598683 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.599122 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.602986 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-config-data\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.604473 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.604870 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.605328 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.606161 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.606383 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.609495 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.610841 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.618782 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.619779 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-kube-api-access-jczdd" (OuterVolumeSpecName: "kube-api-access-jczdd") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "kube-api-access-jczdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.623793 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.636558 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.638959 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.640940 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.641918 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a0754e-917b-4fcf-bc77-3b510479a986-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.641977 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b5a0754e-917b-4fcf-bc77-3b510479a986-pod-info" (OuterVolumeSpecName: "pod-info") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.642476 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.652290 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wptl5\" (UniqueName: \"kubernetes.io/projected/eb8b2810-7037-4a14-8a57-b65f8d9f8af7-kube-api-access-wptl5\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.686668 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"eb8b2810-7037-4a14-8a57-b65f8d9f8af7\") " pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.691846 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-config-data" (OuterVolumeSpecName: "config-data") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.703903 4688 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-plugins-conf\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.703932 4688 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5a0754e-917b-4fcf-bc77-3b510479a986-pod-info\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.703942 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.703953 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.703968 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.703984 4688 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5a0754e-917b-4fcf-bc77-3b510479a986-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.703996 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jczdd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-kube-api-access-jczdd\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.704006 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.704046 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.732281 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-server-conf" (OuterVolumeSpecName: "server-conf") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.735876 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.787001 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.806434 4688 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5a0754e-917b-4fcf-bc77-3b510479a986-server-conf\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.806477 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.809957 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b5a0754e-917b-4fcf-bc77-3b510479a986" (UID: "b5a0754e-917b-4fcf-bc77-3b510479a986"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:48:50 crc kubenswrapper[4688]: I0930 17:48:50.913789 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5a0754e-917b-4fcf-bc77-3b510479a986-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.224869 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.375143 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5a0754e-917b-4fcf-bc77-3b510479a986","Type":"ContainerDied","Data":"8e31b3533244a6b07a5521795af3fe6e29ef695604b57f656567f9f166aaca4c"} Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.375211 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.375216 4688 scope.go:117] "RemoveContainer" containerID="ee2346cdd1ae34be3c853bf0829ee468cea2299a295759b6eea684fa3fe690da" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.376899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eb8b2810-7037-4a14-8a57-b65f8d9f8af7","Type":"ContainerStarted","Data":"ff5366d3aa9b2b1322a2958d0406a44b358831e2a50b9a8248e2ed4bba48fd8c"} Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.437190 4688 scope.go:117] "RemoveContainer" containerID="3c6b763ad6b225b3b081ca605beead7987f546243dd6c3b45b77255cc612473d" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.441369 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cfbeb46-f5ad-46f6-93e0-72c296703b5c" path="/var/lib/kubelet/pods/5cfbeb46-f5ad-46f6-93e0-72c296703b5c/volumes" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.452399 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.458930 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.471125 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:48:51 crc kubenswrapper[4688]: E0930 17:48:51.471894 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a0754e-917b-4fcf-bc77-3b510479a986" containerName="setup-container" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.478246 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a0754e-917b-4fcf-bc77-3b510479a986" containerName="setup-container" Sep 30 17:48:51 crc kubenswrapper[4688]: E0930 17:48:51.478510 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a0754e-917b-4fcf-bc77-3b510479a986" containerName="rabbitmq" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.478608 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a0754e-917b-4fcf-bc77-3b510479a986" containerName="rabbitmq" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.479213 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a0754e-917b-4fcf-bc77-3b510479a986" containerName="rabbitmq" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.481004 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.485404 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.485690 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.486590 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nzrpd" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.486757 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.486889 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.487010 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.487142 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.491941 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529519 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529613 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529642 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36c7ea93-ad69-4dda-abfe-2679e70870d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529664 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36c7ea93-ad69-4dda-abfe-2679e70870d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529682 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nxdx\" (UniqueName: \"kubernetes.io/projected/36c7ea93-ad69-4dda-abfe-2679e70870d8-kube-api-access-5nxdx\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529731 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36c7ea93-ad69-4dda-abfe-2679e70870d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529773 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36c7ea93-ad69-4dda-abfe-2679e70870d8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529794 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529836 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.529953 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.530033 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36c7ea93-ad69-4dda-abfe-2679e70870d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.632004 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.632059 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36c7ea93-ad69-4dda-abfe-2679e70870d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.632097 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36c7ea93-ad69-4dda-abfe-2679e70870d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.632122 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5nxdx\" (UniqueName: \"kubernetes.io/projected/36c7ea93-ad69-4dda-abfe-2679e70870d8-kube-api-access-5nxdx\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.632165 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36c7ea93-ad69-4dda-abfe-2679e70870d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.632555 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.633473 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36c7ea93-ad69-4dda-abfe-2679e70870d8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.633531 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36c7ea93-ad69-4dda-abfe-2679e70870d8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.633608 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.633711 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.633744 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.634226 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36c7ea93-ad69-4dda-abfe-2679e70870d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.633877 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") 
device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.634288 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.634119 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.633919 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/36c7ea93-ad69-4dda-abfe-2679e70870d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.635753 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/36c7ea93-ad69-4dda-abfe-2679e70870d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.636541 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/36c7ea93-ad69-4dda-abfe-2679e70870d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.636886 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/36c7ea93-ad69-4dda-abfe-2679e70870d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.639177 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.640055 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/36c7ea93-ad69-4dda-abfe-2679e70870d8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.650463 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nxdx\" (UniqueName: \"kubernetes.io/projected/36c7ea93-ad69-4dda-abfe-2679e70870d8-kube-api-access-5nxdx\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.665520 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"36c7ea93-ad69-4dda-abfe-2679e70870d8\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:51 crc kubenswrapper[4688]: I0930 17:48:51.823100 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.262775 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 17:48:52 crc kubenswrapper[4688]: W0930 17:48:52.275203 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36c7ea93_ad69_4dda_abfe_2679e70870d8.slice/crio-fea0a6b0c69333f64147302ff8a1dccbf357b63c95f24e5bf7f465ffab97a75f WatchSource:0}: Error finding container fea0a6b0c69333f64147302ff8a1dccbf357b63c95f24e5bf7f465ffab97a75f: Status 404 returned error can't find the container with id fea0a6b0c69333f64147302ff8a1dccbf357b63c95f24e5bf7f465ffab97a75f Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.393694 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eb8b2810-7037-4a14-8a57-b65f8d9f8af7","Type":"ContainerStarted","Data":"6403529e45dc53af40a515570c2dc88fc5d4656cbc35a6ecda536603d6ff0c54"} Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.399486 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36c7ea93-ad69-4dda-abfe-2679e70870d8","Type":"ContainerStarted","Data":"fea0a6b0c69333f64147302ff8a1dccbf357b63c95f24e5bf7f465ffab97a75f"} Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.484550 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74b56f867-2xzsv"] Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.486690 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.488930 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.502994 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74b56f867-2xzsv"] Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.546174 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.546250 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.656440 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p85rn\" (UniqueName: \"kubernetes.io/projected/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-kube-api-access-p85rn\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.656818 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-openstack-edpm-ipam\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.656960 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-svc\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.656997 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-swift-storage-0\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.657104 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-config\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.657167 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-nb\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " 
pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.657195 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-sb\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.760061 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-svc\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.760136 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-swift-storage-0\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.760188 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-config\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.760222 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-nb\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.760242 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-sb\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.760298 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p85rn\" (UniqueName: \"kubernetes.io/projected/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-kube-api-access-p85rn\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.760385 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-openstack-edpm-ipam\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.761264 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-svc\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc 
kubenswrapper[4688]: I0930 17:48:52.761269 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-config\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.761324 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-sb\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.765401 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-nb\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.765639 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-swift-storage-0\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.765855 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-openstack-edpm-ipam\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.782482 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p85rn\" (UniqueName: \"kubernetes.io/projected/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-kube-api-access-p85rn\") pod \"dnsmasq-dns-74b56f867-2xzsv\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") " pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:52 crc kubenswrapper[4688]: I0930 17:48:52.810912 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:53 crc kubenswrapper[4688]: I0930 17:48:53.294359 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74b56f867-2xzsv"] Sep 30 17:48:53 crc kubenswrapper[4688]: W0930 17:48:53.294712 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f8c5be6_7a5a_4a0e_809f_da849f3ebdc0.slice/crio-08a099a337a7bfaa90fd75b80c6826821ba7f2bf8701da48854efb11e34baf97 WatchSource:0}: Error finding container 08a099a337a7bfaa90fd75b80c6826821ba7f2bf8701da48854efb11e34baf97: Status 404 returned error can't find the container with id 08a099a337a7bfaa90fd75b80c6826821ba7f2bf8701da48854efb11e34baf97 Sep 30 17:48:53 crc kubenswrapper[4688]: I0930 17:48:53.409877 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36c7ea93-ad69-4dda-abfe-2679e70870d8","Type":"ContainerStarted","Data":"44449b41e1761e0b58a19e2b909e936cb232e5aa3b75257cb5db655ec4cd8579"} Sep 30 17:48:53 crc kubenswrapper[4688]: I0930 17:48:53.414483 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" event={"ID":"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0","Type":"ContainerStarted","Data":"08a099a337a7bfaa90fd75b80c6826821ba7f2bf8701da48854efb11e34baf97"} Sep 30 17:48:53 crc kubenswrapper[4688]: I0930 17:48:53.452094 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a0754e-917b-4fcf-bc77-3b510479a986" path="/var/lib/kubelet/pods/b5a0754e-917b-4fcf-bc77-3b510479a986/volumes" Sep 30 17:48:54 crc kubenswrapper[4688]: I0930 17:48:54.424008 4688 generic.go:334] "Generic (PLEG): container finished" podID="7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" containerID="f147694a983d72e1e610cfc1bb3ffb46839a1cc678b5846c7bfdb1e1a0612c96" exitCode=0 Sep 30 17:48:54 crc kubenswrapper[4688]: I0930 17:48:54.424065 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" event={"ID":"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0","Type":"ContainerDied","Data":"f147694a983d72e1e610cfc1bb3ffb46839a1cc678b5846c7bfdb1e1a0612c96"} Sep 30 17:48:55 crc kubenswrapper[4688]: I0930 17:48:55.446649 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" event={"ID":"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0","Type":"ContainerStarted","Data":"b6a587f71ab55e7fd3d218d6a3747c0e912ca8ac645206be7d8ae3ef3750a3ef"} Sep 30 17:48:55 crc kubenswrapper[4688]: I0930 17:48:55.447689 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:48:55 crc kubenswrapper[4688]: I0930 17:48:55.462259 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" podStartSLOduration=3.462238059 podStartE2EDuration="3.462238059s" podCreationTimestamp="2025-09-30 17:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:48:55.459302723 +0000 UTC m=+1992.754740291" watchObservedRunningTime="2025-09-30 17:48:55.462238059 +0000 UTC m=+1992.757675637" Sep 30 17:49:02 crc kubenswrapper[4688]: I0930 17:49:02.813061 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" Sep 30 17:49:02 crc kubenswrapper[4688]: I0930 17:49:02.892551 4688 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-778d7c596f-8x8m5"] Sep 30 17:49:02 crc kubenswrapper[4688]: I0930 17:49:02.892873 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" podUID="03b603fb-d4ec-4310-a293-f9cfcbcd5495" containerName="dnsmasq-dns" containerID="cri-o://c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340" gracePeriod=10 Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.034423 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b757848d5-m4nb9"] Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.036220 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.067296 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b757848d5-m4nb9"] Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.178017 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.178264 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-ovsdbserver-nb\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.178329 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-config\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.178358 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-dns-swift-storage-0\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.178400 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fb62\" (UniqueName: \"kubernetes.io/projected/7fe9be74-65ff-4cd9-8dc4-866b771262bc-kube-api-access-2fb62\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.178442 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-dns-svc\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.178468 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-ovsdbserver-sb\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.280498 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.280552 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-ovsdbserver-nb\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.280653 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-config\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.280681 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-dns-swift-storage-0\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.280729 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fb62\" (UniqueName: \"kubernetes.io/projected/7fe9be74-65ff-4cd9-8dc4-866b771262bc-kube-api-access-2fb62\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.280773 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-dns-svc\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.280806 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-ovsdbserver-sb\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.281829 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-openstack-edpm-ipam\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.282117 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-dns-swift-storage-0\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.282696 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-dns-svc\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.282967 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-ovsdbserver-sb\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.282982 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-ovsdbserver-nb\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.283255 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fe9be74-65ff-4cd9-8dc4-866b771262bc-config\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.305208 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fb62\" (UniqueName: \"kubernetes.io/projected/7fe9be74-65ff-4cd9-8dc4-866b771262bc-kube-api-access-2fb62\") pod \"dnsmasq-dns-b757848d5-m4nb9\" (UID: \"7fe9be74-65ff-4cd9-8dc4-866b771262bc\") " pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.360981 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b757848d5-m4nb9" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.369377 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.484306 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-svc\") pod \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.485922 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnr8t\" (UniqueName: \"kubernetes.io/projected/03b603fb-d4ec-4310-a293-f9cfcbcd5495-kube-api-access-lnr8t\") pod \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.486061 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-nb\") pod \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.486182 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-config\") pod \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.486256 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-swift-storage-0\") pod \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.486314 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-sb\") pod \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\" (UID: \"03b603fb-d4ec-4310-a293-f9cfcbcd5495\") " Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.497036 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b603fb-d4ec-4310-a293-f9cfcbcd5495-kube-api-access-lnr8t" (OuterVolumeSpecName: "kube-api-access-lnr8t") pod "03b603fb-d4ec-4310-a293-f9cfcbcd5495" (UID: "03b603fb-d4ec-4310-a293-f9cfcbcd5495"). InnerVolumeSpecName "kube-api-access-lnr8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.534121 4688 generic.go:334] "Generic (PLEG): container finished" podID="03b603fb-d4ec-4310-a293-f9cfcbcd5495" containerID="c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340" exitCode=0 Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.534180 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" event={"ID":"03b603fb-d4ec-4310-a293-f9cfcbcd5495","Type":"ContainerDied","Data":"c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340"} Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.534216 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" event={"ID":"03b603fb-d4ec-4310-a293-f9cfcbcd5495","Type":"ContainerDied","Data":"b444ec1dc3d83b5a0111dbd4745454c72167ce878e0a981fb4c68b872c06b126"} Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.534236 4688 scope.go:117] "RemoveContainer" containerID="c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.534480 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-778d7c596f-8x8m5" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.550870 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-config" (OuterVolumeSpecName: "config") pod "03b603fb-d4ec-4310-a293-f9cfcbcd5495" (UID: "03b603fb-d4ec-4310-a293-f9cfcbcd5495"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.564727 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03b603fb-d4ec-4310-a293-f9cfcbcd5495" (UID: "03b603fb-d4ec-4310-a293-f9cfcbcd5495"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.565310 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "03b603fb-d4ec-4310-a293-f9cfcbcd5495" (UID: "03b603fb-d4ec-4310-a293-f9cfcbcd5495"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.572791 4688 scope.go:117] "RemoveContainer" containerID="b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.574329 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03b603fb-d4ec-4310-a293-f9cfcbcd5495" (UID: "03b603fb-d4ec-4310-a293-f9cfcbcd5495"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.575911 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03b603fb-d4ec-4310-a293-f9cfcbcd5495" (UID: "03b603fb-d4ec-4310-a293-f9cfcbcd5495"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.589288 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.589507 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnr8t\" (UniqueName: \"kubernetes.io/projected/03b603fb-d4ec-4310-a293-f9cfcbcd5495-kube-api-access-lnr8t\") on node \"crc\" DevicePath \"\"" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.589610 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.589671 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-config\") on node \"crc\" DevicePath \"\"" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.589722 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.589784 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b603fb-d4ec-4310-a293-f9cfcbcd5495-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.599986 4688 scope.go:117] "RemoveContainer" containerID="c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340" Sep 30 17:49:03 crc kubenswrapper[4688]: E0930 17:49:03.600666 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340\": container with ID starting with c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340 not found: ID does not exist" containerID="c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.600769 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340"} err="failed to get container status \"c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340\": rpc error: code = NotFound desc = could not find container \"c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340\": container with ID starting with c5381ff619cfd5184545beb52afada9ed4578bd04ece2b5cb814d06f0f6cb340 not found: ID does not exist" Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.600800 4688 scope.go:117] "RemoveContainer" containerID="b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588" Sep 30 17:49:03 crc 
kubenswrapper[4688]: E0930 17:49:03.601149 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588\": container with ID starting with b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588 not found: ID does not exist" containerID="b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588"
Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.601193 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588"} err="failed to get container status \"b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588\": rpc error: code = NotFound desc = could not find container \"b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588\": container with ID starting with b36174bd4638a123742c6136405c01b72205d6400dd183e68f521c34c0c5a588 not found: ID does not exist"
Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.881613 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b757848d5-m4nb9"]
Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.891482 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-778d7c596f-8x8m5"]
Sep 30 17:49:03 crc kubenswrapper[4688]: I0930 17:49:03.898410 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-778d7c596f-8x8m5"]
Sep 30 17:49:04 crc kubenswrapper[4688]: I0930 17:49:04.544304 4688 generic.go:334] "Generic (PLEG): container finished" podID="7fe9be74-65ff-4cd9-8dc4-866b771262bc" containerID="d522756fb71e4cdeb917d8471ae08d85329d4b60c182fc9011894b42a0eb5290" exitCode=0
Sep 30 17:49:04 crc kubenswrapper[4688]: I0930 17:49:04.544408 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b757848d5-m4nb9" event={"ID":"7fe9be74-65ff-4cd9-8dc4-866b771262bc","Type":"ContainerDied","Data":"d522756fb71e4cdeb917d8471ae08d85329d4b60c182fc9011894b42a0eb5290"}
Sep 30 17:49:04 crc kubenswrapper[4688]: I0930 17:49:04.544606 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b757848d5-m4nb9" event={"ID":"7fe9be74-65ff-4cd9-8dc4-866b771262bc","Type":"ContainerStarted","Data":"2b3c8844079b1446379e490d4d9c5bda96be8f2a780e698be83054fabf57f649"}
Sep 30 17:49:05 crc kubenswrapper[4688]: I0930 17:49:05.446728 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b603fb-d4ec-4310-a293-f9cfcbcd5495" path="/var/lib/kubelet/pods/03b603fb-d4ec-4310-a293-f9cfcbcd5495/volumes"
Sep 30 17:49:05 crc kubenswrapper[4688]: I0930 17:49:05.556415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b757848d5-m4nb9" event={"ID":"7fe9be74-65ff-4cd9-8dc4-866b771262bc","Type":"ContainerStarted","Data":"b667fea65c7a11c05617ac61a389820b727cc1e62c35dfbcdb63054aef6b1827"}
Sep 30 17:49:05 crc kubenswrapper[4688]: I0930 17:49:05.584880 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b757848d5-m4nb9" podStartSLOduration=2.584862533 podStartE2EDuration="2.584862533s" podCreationTimestamp="2025-09-30 17:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:49:05.572962875 +0000 UTC m=+2002.868400443" watchObservedRunningTime="2025-09-30 17:49:05.584862533 +0000 UTC m=+2002.880300091"
Sep 30 17:49:06 crc kubenswrapper[4688]: I0930 17:49:06.564049 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b757848d5-m4nb9"
Sep 30 17:49:13 crc kubenswrapper[4688]: I0930 17:49:13.362778 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b757848d5-m4nb9"
Sep 30 17:49:13 crc kubenswrapper[4688]: I0930 17:49:13.428614 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74b56f867-2xzsv"]
Sep 30 17:49:13 crc kubenswrapper[4688]: I0930 17:49:13.428884 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" podUID="7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" containerName="dnsmasq-dns" containerID="cri-o://b6a587f71ab55e7fd3d218d6a3747c0e912ca8ac645206be7d8ae3ef3750a3ef" gracePeriod=10
Sep 30 17:49:13 crc kubenswrapper[4688]: I0930 17:49:13.640866 4688 generic.go:334] "Generic (PLEG): container finished" podID="7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" containerID="b6a587f71ab55e7fd3d218d6a3747c0e912ca8ac645206be7d8ae3ef3750a3ef" exitCode=0
Sep 30 17:49:13 crc kubenswrapper[4688]: I0930 17:49:13.640918 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" event={"ID":"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0","Type":"ContainerDied","Data":"b6a587f71ab55e7fd3d218d6a3747c0e912ca8ac645206be7d8ae3ef3750a3ef"}
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.014904 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b56f867-2xzsv"
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.124309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-config\") pod \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") "
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.124458 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-svc\") pod \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") "
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.124574 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p85rn\" (UniqueName: \"kubernetes.io/projected/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-kube-api-access-p85rn\") pod \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") "
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.124612 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-openstack-edpm-ipam\") pod \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") "
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.124638 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-sb\") pod \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") "
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.124696 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-swift-storage-0\") pod \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") "
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.124754 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-nb\") pod \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\" (UID: \"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0\") "
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.131018 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-kube-api-access-p85rn" (OuterVolumeSpecName: "kube-api-access-p85rn") pod "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" (UID: "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0"). InnerVolumeSpecName "kube-api-access-p85rn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.177966 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-config" (OuterVolumeSpecName: "config") pod "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" (UID: "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.182326 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" (UID: "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.183286 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" (UID: "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.184349 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" (UID: "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.188226 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" (UID: "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.190492 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" (UID: "7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.227848 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-config\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.227881 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.227892 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p85rn\" (UniqueName: \"kubernetes.io/projected/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-kube-api-access-p85rn\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.227901 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.227910 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.227918 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.227926 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.650887 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b56f867-2xzsv" event={"ID":"7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0","Type":"ContainerDied","Data":"08a099a337a7bfaa90fd75b80c6826821ba7f2bf8701da48854efb11e34baf97"}
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.650965 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b56f867-2xzsv"
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.651125 4688 scope.go:117] "RemoveContainer" containerID="b6a587f71ab55e7fd3d218d6a3747c0e912ca8ac645206be7d8ae3ef3750a3ef"
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.689251 4688 scope.go:117] "RemoveContainer" containerID="f147694a983d72e1e610cfc1bb3ffb46839a1cc678b5846c7bfdb1e1a0612c96"
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.690890 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74b56f867-2xzsv"]
Sep 30 17:49:14 crc kubenswrapper[4688]: I0930 17:49:14.701030 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74b56f867-2xzsv"]
Sep 30 17:49:15 crc kubenswrapper[4688]: I0930 17:49:15.440780 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" path="/var/lib/kubelet/pods/7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0/volumes"
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.546074 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.546975 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.547138 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4"
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.548231 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b77f55d7b58dbf58878e2a71bf48c68897966db1af74b1ec851794bc9fa339c"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.548420 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://2b77f55d7b58dbf58878e2a71bf48c68897966db1af74b1ec851794bc9fa339c" gracePeriod=600
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.736865 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="2b77f55d7b58dbf58878e2a71bf48c68897966db1af74b1ec851794bc9fa339c" exitCode=0
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.736944 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"2b77f55d7b58dbf58878e2a71bf48c68897966db1af74b1ec851794bc9fa339c"}
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.737001 4688 scope.go:117] "RemoveContainer" containerID="94ecc6440c75c19a5ee27302e118e9d6d280adfc5787155d263b56df05b3d3b0"
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.739263 4688 generic.go:334] "Generic (PLEG): container finished" podID="eb8b2810-7037-4a14-8a57-b65f8d9f8af7" containerID="6403529e45dc53af40a515570c2dc88fc5d4656cbc35a6ecda536603d6ff0c54" exitCode=0
Sep 30 17:49:22 crc kubenswrapper[4688]: I0930 17:49:22.739302 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eb8b2810-7037-4a14-8a57-b65f8d9f8af7","Type":"ContainerDied","Data":"6403529e45dc53af40a515570c2dc88fc5d4656cbc35a6ecda536603d6ff0c54"}
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.752794 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01"}
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.755507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"eb8b2810-7037-4a14-8a57-b65f8d9f8af7","Type":"ContainerStarted","Data":"0e122ce45d8b0488cdff13e319657285cceccf2b23e0af4a1c1b160e3c26f0ba"}
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.755785 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.757797 4688 generic.go:334] "Generic (PLEG): container finished" podID="36c7ea93-ad69-4dda-abfe-2679e70870d8" containerID="44449b41e1761e0b58a19e2b909e936cb232e5aa3b75257cb5db655ec4cd8579" exitCode=0
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.757841 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36c7ea93-ad69-4dda-abfe-2679e70870d8","Type":"ContainerDied","Data":"44449b41e1761e0b58a19e2b909e936cb232e5aa3b75257cb5db655ec4cd8579"}
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.826700 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=33.826681166 podStartE2EDuration="33.826681166s" podCreationTimestamp="2025-09-30 17:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:49:23.824838988 +0000 UTC m=+2021.120276546" watchObservedRunningTime="2025-09-30 17:49:23.826681166 +0000 UTC m=+2021.122118724"
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.988270 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vh7rv"]
Sep 30 17:49:23 crc kubenswrapper[4688]: E0930 17:49:23.988999 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" containerName="init"
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.989016 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" containerName="init"
Sep 30 17:49:23 crc kubenswrapper[4688]: E0930 17:49:23.989038 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" containerName="dnsmasq-dns"
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.989045 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" containerName="dnsmasq-dns"
Sep 30 17:49:23 crc kubenswrapper[4688]: E0930 17:49:23.989080 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b603fb-d4ec-4310-a293-f9cfcbcd5495" containerName="init"
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.989087 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b603fb-d4ec-4310-a293-f9cfcbcd5495" containerName="init"
Sep 30 17:49:23 crc kubenswrapper[4688]: E0930 17:49:23.989099 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b603fb-d4ec-4310-a293-f9cfcbcd5495" containerName="dnsmasq-dns"
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.989105 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b603fb-d4ec-4310-a293-f9cfcbcd5495" containerName="dnsmasq-dns"
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.989297 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8c5be6-7a5a-4a0e-809f-da849f3ebdc0" containerName="dnsmasq-dns"
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.989313 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b603fb-d4ec-4310-a293-f9cfcbcd5495" containerName="dnsmasq-dns"
Sep 30 17:49:23 crc kubenswrapper[4688]: I0930 17:49:23.990658 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.008325 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh7rv"]
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.028991 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-utilities\") pod \"redhat-operators-vh7rv\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") " pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.029070 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxpg\" (UniqueName: \"kubernetes.io/projected/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-kube-api-access-cjxpg\") pod \"redhat-operators-vh7rv\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") " pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.029120 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-catalog-content\") pod \"redhat-operators-vh7rv\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") " pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.130743 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-catalog-content\") pod \"redhat-operators-vh7rv\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") " pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.130892 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-utilities\") pod \"redhat-operators-vh7rv\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") " pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.130948 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjxpg\" (UniqueName: \"kubernetes.io/projected/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-kube-api-access-cjxpg\") pod \"redhat-operators-vh7rv\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") " pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.131537 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-catalog-content\") pod \"redhat-operators-vh7rv\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") " pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.131706 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-utilities\") pod \"redhat-operators-vh7rv\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") " pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.149507 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjxpg\" (UniqueName: \"kubernetes.io/projected/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-kube-api-access-cjxpg\") pod \"redhat-operators-vh7rv\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") " pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.313544 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.775185 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"36c7ea93-ad69-4dda-abfe-2679e70870d8","Type":"ContainerStarted","Data":"e97c33f697978f60680c5e70cf40734ca2d60d0c8d1b9a0376eba09a3bec3d39"}
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.776720 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.804698 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=33.804558725 podStartE2EDuration="33.804558725s" podCreationTimestamp="2025-09-30 17:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 17:49:24.797938404 +0000 UTC m=+2022.093375972" watchObservedRunningTime="2025-09-30 17:49:24.804558725 +0000 UTC m=+2022.099996283"
Sep 30 17:49:24 crc kubenswrapper[4688]: I0930 17:49:24.864780 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vh7rv"]
Sep 30 17:49:25 crc kubenswrapper[4688]: I0930 17:49:25.800653 4688 generic.go:334] "Generic (PLEG): container finished" podID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerID="125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b" exitCode=0
Sep 30 17:49:25 crc kubenswrapper[4688]: I0930 17:49:25.800777 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh7rv" event={"ID":"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124","Type":"ContainerDied","Data":"125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b"}
Sep 30 17:49:25 crc kubenswrapper[4688]: I0930 17:49:25.802480 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh7rv" event={"ID":"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124","Type":"ContainerStarted","Data":"de9b50eaa80b0db93cb02e90ab786f16b6f1c13e2f1482c9fec0d492c136003b"}
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.565878 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"]
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.567587 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.571753 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.571776 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.571961 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.572169 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.592079 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"]
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.684248 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgt24\" (UniqueName: \"kubernetes.io/projected/e67f930b-db0b-4d29-b9e5-5a259d6a9534-kube-api-access-fgt24\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.684311 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.684361 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.684620 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.787090 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.788617 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.788707 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.789055 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgt24\" (UniqueName: \"kubernetes.io/projected/e67f930b-db0b-4d29-b9e5-5a259d6a9534-kube-api-access-fgt24\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.794367 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.800054 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.801658 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.817123 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgt24\" (UniqueName: \"kubernetes.io/projected/e67f930b-db0b-4d29-b9e5-5a259d6a9534-kube-api-access-fgt24\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:26 crc kubenswrapper[4688]: I0930 17:49:26.893031 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:27 crc kubenswrapper[4688]: W0930 17:49:27.433375 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode67f930b_db0b_4d29_b9e5_5a259d6a9534.slice/crio-1048c9020692825b67415cad86140249a75ccf435f2b53e42389beaecf6b668a WatchSource:0}: Error finding container 1048c9020692825b67415cad86140249a75ccf435f2b53e42389beaecf6b668a: Status 404 returned error can't find the container with id 1048c9020692825b67415cad86140249a75ccf435f2b53e42389beaecf6b668a
Sep 30 17:49:27 crc kubenswrapper[4688]: I0930 17:49:27.443596 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"]
Sep 30 17:49:27 crc kubenswrapper[4688]: I0930 17:49:27.829477 4688 generic.go:334] "Generic (PLEG): container finished" podID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerID="a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a" exitCode=0
Sep 30 17:49:27 crc kubenswrapper[4688]: I0930 17:49:27.829620 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh7rv" event={"ID":"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124","Type":"ContainerDied","Data":"a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a"}
Sep 30 17:49:27 crc kubenswrapper[4688]: I0930 17:49:27.850265 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc" event={"ID":"e67f930b-db0b-4d29-b9e5-5a259d6a9534","Type":"ContainerStarted","Data":"1048c9020692825b67415cad86140249a75ccf435f2b53e42389beaecf6b668a"}
Sep 30 17:49:28 crc kubenswrapper[4688]: I0930 17:49:28.878960 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh7rv" event={"ID":"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124","Type":"ContainerStarted","Data":"89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d"}
Sep 30 17:49:28 crc kubenswrapper[4688]: I0930 17:49:28.911985 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vh7rv" podStartSLOduration=3.384069207 podStartE2EDuration="5.911950996s" podCreationTimestamp="2025-09-30 17:49:23 +0000 UTC" firstStartedPulling="2025-09-30 17:49:25.804807533 +0000 UTC m=+2023.100245091" lastFinishedPulling="2025-09-30 17:49:28.332689322 +0000 UTC m=+2025.628126880" observedRunningTime="2025-09-30 17:49:28.902667435 +0000 UTC m=+2026.198104993" watchObservedRunningTime="2025-09-30 17:49:28.911950996 +0000 UTC m=+2026.207388554"
Sep 30 17:49:34 crc kubenswrapper[4688]: I0930 17:49:34.314385 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:34 crc kubenswrapper[4688]: I0930 17:49:34.314884 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:49:35 crc kubenswrapper[4688]: I0930 17:49:35.366627 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vh7rv" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="registry-server" probeResult="failure" output=<
Sep 30 17:49:35 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s
Sep 30 17:49:35 crc kubenswrapper[4688]: >
Sep 30 17:49:37 crc kubenswrapper[4688]: I0930 17:49:37.973321 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc" event={"ID":"e67f930b-db0b-4d29-b9e5-5a259d6a9534","Type":"ContainerStarted","Data":"d57c9745b03d7c8ee09ca4a3745ff1713a92aec44e10e2b72b9ec9e7a4dcffdf"}
Sep 30 17:49:37 crc kubenswrapper[4688]: I0930 17:49:37.995668 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc" podStartSLOduration=1.954530106 podStartE2EDuration="11.995646917s" podCreationTimestamp="2025-09-30 17:49:26 +0000 UTC" firstStartedPulling="2025-09-30 17:49:27.435914993 +0000 UTC m=+2024.731352551" lastFinishedPulling="2025-09-30 17:49:37.477031804 +0000 UTC m=+2034.772469362" observedRunningTime="2025-09-30 17:49:37.990536575 +0000 UTC m=+2035.285974133" watchObservedRunningTime="2025-09-30 17:49:37.995646917 +0000 UTC m=+2035.291084475"
Sep 30 17:49:40 crc kubenswrapper[4688]: I0930 17:49:40.790781 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Sep 30 17:49:41 crc kubenswrapper[4688]: I0930 17:49:41.826773 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 17:49:45 crc kubenswrapper[4688]: I0930 17:49:45.379969 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vh7rv" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="registry-server" probeResult="failure" output=<
Sep 30 17:49:45 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s
Sep 30 17:49:45 crc kubenswrapper[4688]: >
Sep 30 17:49:48 crc kubenswrapper[4688]: I0930 17:49:48.411689 4688 scope.go:117] "RemoveContainer" containerID="45ec4b1e08c87e09ea5749f48aabeb5a9950badd438ff358ef821f6b233bec3a"
Sep 30 17:49:48 crc kubenswrapper[4688]: I0930 17:49:48.432179 4688 scope.go:117] "RemoveContainer" containerID="c082befd234dc0901188426171c3b48a4b3c235fbbcf5d94c96dffbfe15ac9d6"
Sep 30 17:49:49 crc kubenswrapper[4688]: I0930 17:49:49.070247 4688 generic.go:334] "Generic (PLEG): container finished" podID="e67f930b-db0b-4d29-b9e5-5a259d6a9534" containerID="d57c9745b03d7c8ee09ca4a3745ff1713a92aec44e10e2b72b9ec9e7a4dcffdf" exitCode=0
Sep 30 17:49:49 crc kubenswrapper[4688]: I0930 17:49:49.070308 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc" event={"ID":"e67f930b-db0b-4d29-b9e5-5a259d6a9534","Type":"ContainerDied","Data":"d57c9745b03d7c8ee09ca4a3745ff1713a92aec44e10e2b72b9ec9e7a4dcffdf"}
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.546268 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.682246 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgt24\" (UniqueName: \"kubernetes.io/projected/e67f930b-db0b-4d29-b9e5-5a259d6a9534-kube-api-access-fgt24\") pod \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") "
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.682346 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-ssh-key\") pod \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") "
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.682497 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-inventory\") pod \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") "
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.683253 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-repo-setup-combined-ca-bundle\") pod \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\" (UID: \"e67f930b-db0b-4d29-b9e5-5a259d6a9534\") "
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.688910 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67f930b-db0b-4d29-b9e5-5a259d6a9534-kube-api-access-fgt24" (OuterVolumeSpecName: "kube-api-access-fgt24") pod "e67f930b-db0b-4d29-b9e5-5a259d6a9534" (UID: "e67f930b-db0b-4d29-b9e5-5a259d6a9534"). InnerVolumeSpecName "kube-api-access-fgt24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.689896 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e67f930b-db0b-4d29-b9e5-5a259d6a9534" (UID: "e67f930b-db0b-4d29-b9e5-5a259d6a9534"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.716669 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e67f930b-db0b-4d29-b9e5-5a259d6a9534" (UID: "e67f930b-db0b-4d29-b9e5-5a259d6a9534"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.718528 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-inventory" (OuterVolumeSpecName: "inventory") pod "e67f930b-db0b-4d29-b9e5-5a259d6a9534" (UID: "e67f930b-db0b-4d29-b9e5-5a259d6a9534"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.785795 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-inventory\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.785830 4688 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.785844 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgt24\" (UniqueName: \"kubernetes.io/projected/e67f930b-db0b-4d29-b9e5-5a259d6a9534-kube-api-access-fgt24\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:50 crc kubenswrapper[4688]: I0930 17:49:50.785859 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e67f930b-db0b-4d29-b9e5-5a259d6a9534-ssh-key\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.091039 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc" event={"ID":"e67f930b-db0b-4d29-b9e5-5a259d6a9534","Type":"ContainerDied","Data":"1048c9020692825b67415cad86140249a75ccf435f2b53e42389beaecf6b668a"}
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.091086 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1048c9020692825b67415cad86140249a75ccf435f2b53e42389beaecf6b668a"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.091149 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.231516 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"]
Sep 30 17:49:51 crc kubenswrapper[4688]: E0930 17:49:51.231995 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67f930b-db0b-4d29-b9e5-5a259d6a9534" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.232022 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67f930b-db0b-4d29-b9e5-5a259d6a9534" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.232275 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67f930b-db0b-4d29-b9e5-5a259d6a9534" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.233111 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.235168 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.235230 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.235873 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.244245 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.246059 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"]
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.398418 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mssck\" (UniqueName: \"kubernetes.io/projected/f33c7eae-02eb-440e-ac3a-f16f098efd47-kube-api-access-mssck\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zqw69\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.398834 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zqw69\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.398999 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zqw69\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.500696 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zqw69\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.500839 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mssck\" (UniqueName: \"kubernetes.io/projected/f33c7eae-02eb-440e-ac3a-f16f098efd47-kube-api-access-mssck\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zqw69\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.500933 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zqw69\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.507157 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zqw69\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.510100 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zqw69\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.517143 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mssck\" (UniqueName: \"kubernetes.io/projected/f33c7eae-02eb-440e-ac3a-f16f098efd47-kube-api-access-mssck\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zqw69\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:51 crc kubenswrapper[4688]: I0930 17:49:51.551398 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:52 crc kubenswrapper[4688]: I0930 17:49:52.097038 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"]
Sep 30 17:49:53 crc kubenswrapper[4688]: I0930 17:49:53.110135 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69" event={"ID":"f33c7eae-02eb-440e-ac3a-f16f098efd47","Type":"ContainerStarted","Data":"d76e60b7c54a37f487f51fb3b64c4dc2fca5cc5d889e91162d1b540543709c8f"}
Sep 30 17:49:53 crc kubenswrapper[4688]: I0930 17:49:53.110500 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69" event={"ID":"f33c7eae-02eb-440e-ac3a-f16f098efd47","Type":"ContainerStarted","Data":"169943c57e39e8f0cec627b9494ac2cb249655fb0ffd1a7ed21ed39c5185e69f"}
Sep 30 17:49:53 crc kubenswrapper[4688]: I0930 17:49:53.134642 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69" podStartSLOduration=1.674147547 podStartE2EDuration="2.134621114s" podCreationTimestamp="2025-09-30 17:49:51 +0000 UTC" firstStartedPulling="2025-09-30 17:49:52.111434821 +0000 UTC m=+2049.406872379" lastFinishedPulling="2025-09-30 17:49:52.571908378 +0000 UTC m=+2049.867345946" observedRunningTime="2025-09-30 17:49:53.126589366 +0000 UTC m=+2050.422026944" watchObservedRunningTime="2025-09-30 17:49:53.134621114 +0000 UTC m=+2050.430058672"
Sep 30 17:49:55 crc kubenswrapper[4688]: I0930 17:49:55.356910 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vh7rv" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="registry-server" probeResult="failure" output=<
Sep 30 17:49:55 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s
Sep 30 17:49:55 crc kubenswrapper[4688]: >
Sep 30 17:49:56 crc kubenswrapper[4688]: I0930 17:49:56.142828 4688 generic.go:334] "Generic (PLEG): container finished" podID="f33c7eae-02eb-440e-ac3a-f16f098efd47" containerID="d76e60b7c54a37f487f51fb3b64c4dc2fca5cc5d889e91162d1b540543709c8f" exitCode=0
Sep 30 17:49:56 crc kubenswrapper[4688]: I0930 17:49:56.142884 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69" event={"ID":"f33c7eae-02eb-440e-ac3a-f16f098efd47","Type":"ContainerDied","Data":"d76e60b7c54a37f487f51fb3b64c4dc2fca5cc5d889e91162d1b540543709c8f"}
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.597281 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.736310 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-inventory\") pod \"f33c7eae-02eb-440e-ac3a-f16f098efd47\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") "
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.736514 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-ssh-key\") pod \"f33c7eae-02eb-440e-ac3a-f16f098efd47\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") "
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.736608 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mssck\" (UniqueName: \"kubernetes.io/projected/f33c7eae-02eb-440e-ac3a-f16f098efd47-kube-api-access-mssck\") pod \"f33c7eae-02eb-440e-ac3a-f16f098efd47\" (UID: \"f33c7eae-02eb-440e-ac3a-f16f098efd47\") "
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.743499 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33c7eae-02eb-440e-ac3a-f16f098efd47-kube-api-access-mssck" (OuterVolumeSpecName: "kube-api-access-mssck") pod "f33c7eae-02eb-440e-ac3a-f16f098efd47" (UID: "f33c7eae-02eb-440e-ac3a-f16f098efd47"). InnerVolumeSpecName "kube-api-access-mssck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.773516 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f33c7eae-02eb-440e-ac3a-f16f098efd47" (UID: "f33c7eae-02eb-440e-ac3a-f16f098efd47"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.787826 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-inventory" (OuterVolumeSpecName: "inventory") pod "f33c7eae-02eb-440e-ac3a-f16f098efd47" (UID: "f33c7eae-02eb-440e-ac3a-f16f098efd47"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.840656 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-inventory\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.840693 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f33c7eae-02eb-440e-ac3a-f16f098efd47-ssh-key\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:57 crc kubenswrapper[4688]: I0930 17:49:57.840706 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mssck\" (UniqueName: \"kubernetes.io/projected/f33c7eae-02eb-440e-ac3a-f16f098efd47-kube-api-access-mssck\") on node \"crc\" DevicePath \"\""
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.164041 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69" event={"ID":"f33c7eae-02eb-440e-ac3a-f16f098efd47","Type":"ContainerDied","Data":"169943c57e39e8f0cec627b9494ac2cb249655fb0ffd1a7ed21ed39c5185e69f"}
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.164097 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="169943c57e39e8f0cec627b9494ac2cb249655fb0ffd1a7ed21ed39c5185e69f"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.164648 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zqw69"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.255630 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"]
Sep 30 17:49:58 crc kubenswrapper[4688]: E0930 17:49:58.256032 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33c7eae-02eb-440e-ac3a-f16f098efd47" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.256050 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33c7eae-02eb-440e-ac3a-f16f098efd47" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.256254 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33c7eae-02eb-440e-ac3a-f16f098efd47" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.256939 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.259525 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.260178 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.260696 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.262664 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.264695 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"]
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.357719 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5fmt\" (UniqueName: \"kubernetes.io/projected/3002489d-18d9-4ab0-b09e-ab37e97ec796-kube-api-access-x5fmt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.357839 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.357952 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.357979 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.460235 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.460653 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.460989 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5fmt\" (UniqueName: \"kubernetes.io/projected/3002489d-18d9-4ab0-b09e-ab37e97ec796-kube-api-access-x5fmt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.461390 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.464911 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.465160 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.477579 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.485068 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5fmt\" (UniqueName: \"kubernetes.io/projected/3002489d-18d9-4ab0-b09e-ab37e97ec796-kube-api-access-x5fmt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:58 crc kubenswrapper[4688]: I0930 17:49:58.575644 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"
Sep 30 17:49:59 crc kubenswrapper[4688]: I0930 17:49:59.103015 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d"]
Sep 30 17:49:59 crc kubenswrapper[4688]: I0930 17:49:59.174636 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d" event={"ID":"3002489d-18d9-4ab0-b09e-ab37e97ec796","Type":"ContainerStarted","Data":"16ce799b30257d4df14beb09e6627f77030c167f6bf649b1aeab88ef4e50d0ba"}
Sep 30 17:50:00 crc kubenswrapper[4688]: I0930 17:50:00.188195 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d" event={"ID":"3002489d-18d9-4ab0-b09e-ab37e97ec796","Type":"ContainerStarted","Data":"3691a7ba3a6cb1c897f8adca002d5c3ddb3c12e6b30605481f5fa89c4d5e5c68"}
Sep 30 17:50:00 crc kubenswrapper[4688]: I0930 17:50:00.218715 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d" podStartSLOduration=1.830880224 podStartE2EDuration="2.218624887s" podCreationTimestamp="2025-09-30 17:49:58 +0000 UTC" firstStartedPulling="2025-09-30 17:49:59.121360315 +0000 UTC m=+2056.416797873" lastFinishedPulling="2025-09-30 17:49:59.509104978 +0000 UTC m=+2056.804542536" observedRunningTime="2025-09-30 17:50:00.205386594 +0000 UTC m=+2057.500824172" watchObservedRunningTime="2025-09-30 17:50:00.218624887 +0000 UTC m=+2057.514062485"
Sep 30 17:50:04 crc kubenswrapper[4688]: I0930 17:50:04.377362 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:50:04 crc kubenswrapper[4688]: I0930 17:50:04.446404 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:50:04 crc kubenswrapper[4688]: I0930 17:50:04.620511 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh7rv"]
Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.247659 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vh7rv" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="registry-server" containerID="cri-o://89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d" gracePeriod=2
Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.684341 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vh7rv"
Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.745732 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-catalog-content\") pod \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") "
Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.746173 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjxpg\" (UniqueName: \"kubernetes.io/projected/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-kube-api-access-cjxpg\") pod \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") "
Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.746204 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-utilities\") pod \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\" (UID: \"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124\") "
Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.746841 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-utilities" (OuterVolumeSpecName: "utilities") pod "5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" (UID: "5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.752223 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-kube-api-access-cjxpg" (OuterVolumeSpecName: "kube-api-access-cjxpg") pod "5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" (UID: "5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124"). InnerVolumeSpecName "kube-api-access-cjxpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.822641 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" (UID: "5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.849073 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjxpg\" (UniqueName: \"kubernetes.io/projected/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-kube-api-access-cjxpg\") on node \"crc\" DevicePath \"\"" Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.849114 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:50:06 crc kubenswrapper[4688]: I0930 17:50:06.849157 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.261642 4688 generic.go:334] "Generic (PLEG): container finished" podID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerID="89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d" exitCode=0 Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.261763 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh7rv" event={"ID":"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124","Type":"ContainerDied","Data":"89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d"} Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.261840 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vh7rv" event={"ID":"5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124","Type":"ContainerDied","Data":"de9b50eaa80b0db93cb02e90ab786f16b6f1c13e2f1482c9fec0d492c136003b"} Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.261871 4688 scope.go:117] "RemoveContainer" containerID="89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.261878 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vh7rv" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.282668 4688 scope.go:117] "RemoveContainer" containerID="a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.314272 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vh7rv"] Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.321115 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vh7rv"] Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.328164 4688 scope.go:117] "RemoveContainer" containerID="125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.367778 4688 scope.go:117] "RemoveContainer" containerID="89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d" Sep 30 17:50:07 crc kubenswrapper[4688]: E0930 17:50:07.368173 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d\": container with ID starting with 89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d not found: ID does not exist" containerID="89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.368223 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d"} err="failed to get container status \"89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d\": rpc error: code = NotFound desc = could not find container \"89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d\": container with ID starting with 89ffef7e76ecd5a830d6a7b28622eac4fbc78ce3b6e41ff9025eebbb1b85ab1d not found: ID does not exist" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.368251 4688 scope.go:117] "RemoveContainer" containerID="a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a" Sep 30 17:50:07 crc kubenswrapper[4688]: E0930 17:50:07.369166 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a\": container with ID starting with a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a not found: ID does not exist" containerID="a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.369210 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a"} err="failed to get container status \"a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a\": rpc error: code = NotFound desc = could not find container \"a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a\": container with ID starting with a0138c676305e12bbc79ceb83be90587121c17c4edd69311772a75643388fb4a not found: ID does not exist" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.369231 4688 scope.go:117] "RemoveContainer" containerID="125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b" Sep 30 17:50:07 crc kubenswrapper[4688]: E0930 17:50:07.369521 4688 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b\": container with ID starting with 125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b not found: ID does not exist" containerID="125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.369557 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b"} err="failed to get container status \"125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b\": rpc error: code = NotFound desc = could not find container \"125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b\": container with ID starting with 125c01253577be08acd73db232da530220415fea62f6f797f7a523884c85e13b not found: ID does not exist" Sep 30 17:50:07 crc kubenswrapper[4688]: I0930 17:50:07.443073 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" path="/var/lib/kubelet/pods/5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124/volumes" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.585615 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rj9jd"] Sep 30 17:51:06 crc kubenswrapper[4688]: E0930 17:51:06.586351 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="registry-server" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.586363 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="registry-server" Sep 30 17:51:06 crc kubenswrapper[4688]: E0930 17:51:06.586383 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="extract-utilities" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.586389 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="extract-utilities" Sep 30 17:51:06 crc kubenswrapper[4688]: E0930 17:51:06.586408 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="extract-content" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.586415 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="extract-content" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.586665 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba5d4a9-6d6b-4dab-b872-d9bc2a6c4124" containerName="registry-server" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.587972 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.612228 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj9jd"] Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.774468 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-catalog-content\") pod \"redhat-marketplace-rj9jd\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.774614 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clphw\" (UniqueName: \"kubernetes.io/projected/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-kube-api-access-clphw\") pod \"redhat-marketplace-rj9jd\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.774659 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-utilities\") pod \"redhat-marketplace-rj9jd\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.876231 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-catalog-content\") pod \"redhat-marketplace-rj9jd\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.876332 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clphw\" (UniqueName: \"kubernetes.io/projected/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-kube-api-access-clphw\") pod \"redhat-marketplace-rj9jd\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.876370 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-utilities\") pod \"redhat-marketplace-rj9jd\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.876899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-catalog-content\") pod \"redhat-marketplace-rj9jd\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.877967 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-utilities\") pod \"redhat-marketplace-rj9jd\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.898258 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-clphw\" (UniqueName: \"kubernetes.io/projected/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-kube-api-access-clphw\") pod \"redhat-marketplace-rj9jd\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:06 crc kubenswrapper[4688]: I0930 17:51:06.911277 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:07 crc kubenswrapper[4688]: I0930 17:51:07.420602 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj9jd"] Sep 30 17:51:07 crc kubenswrapper[4688]: I0930 17:51:07.830010 4688 generic.go:334] "Generic (PLEG): container finished" podID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerID="4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058" exitCode=0 Sep 30 17:51:07 crc kubenswrapper[4688]: I0930 17:51:07.830065 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj9jd" event={"ID":"6d7ea13a-8694-40d1-bf01-3ddc422d6d76","Type":"ContainerDied","Data":"4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058"} Sep 30 17:51:07 crc kubenswrapper[4688]: I0930 17:51:07.830088 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj9jd" event={"ID":"6d7ea13a-8694-40d1-bf01-3ddc422d6d76","Type":"ContainerStarted","Data":"a547d7b2a4d26a1be7974308cdc9a985ed456839553ada8e93b5b9ca3ecc8dfe"} Sep 30 17:51:07 crc kubenswrapper[4688]: I0930 17:51:07.832204 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:51:08 crc kubenswrapper[4688]: I0930 17:51:08.982190 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ckcsq"] Sep 30 17:51:08 crc kubenswrapper[4688]: I0930 17:51:08.984354 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:08 crc kubenswrapper[4688]: I0930 17:51:08.996860 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckcsq"] Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.135775 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-utilities\") pod \"community-operators-ckcsq\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.136146 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-catalog-content\") pod \"community-operators-ckcsq\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.136238 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b2f8\" (UniqueName: \"kubernetes.io/projected/17582640-d9bc-400e-abb5-070967ee119b-kube-api-access-5b2f8\") pod \"community-operators-ckcsq\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.186783 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p4q9w"] Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.189551 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.201116 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4q9w"] Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.237382 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b2f8\" (UniqueName: \"kubernetes.io/projected/17582640-d9bc-400e-abb5-070967ee119b-kube-api-access-5b2f8\") pod \"community-operators-ckcsq\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.237739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-utilities\") pod \"community-operators-ckcsq\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.238097 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-catalog-content\") pod \"community-operators-ckcsq\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.238202 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-utilities\") pod \"community-operators-ckcsq\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.238406 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-catalog-content\") pod \"community-operators-ckcsq\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.259701 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b2f8\" (UniqueName: \"kubernetes.io/projected/17582640-d9bc-400e-abb5-070967ee119b-kube-api-access-5b2f8\") pod \"community-operators-ckcsq\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.309434 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.340104 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-utilities\") pod \"certified-operators-p4q9w\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.340286 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5h9z\" (UniqueName: \"kubernetes.io/projected/9611611e-3678-4e95-883d-63b79b0274fd-kube-api-access-k5h9z\") pod \"certified-operators-p4q9w\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.340321 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-catalog-content\") pod \"certified-operators-p4q9w\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.441614 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-catalog-content\") pod \"certified-operators-p4q9w\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.441686 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-utilities\") pod \"certified-operators-p4q9w\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.441813 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5h9z\" (UniqueName: \"kubernetes.io/projected/9611611e-3678-4e95-883d-63b79b0274fd-kube-api-access-k5h9z\") pod \"certified-operators-p4q9w\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.442553 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-catalog-content\") pod \"certified-operators-p4q9w\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.442816 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-utilities\") pod \"certified-operators-p4q9w\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.467767 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5h9z\" (UniqueName: \"kubernetes.io/projected/9611611e-3678-4e95-883d-63b79b0274fd-kube-api-access-k5h9z\") pod 
\"certified-operators-p4q9w\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.518779 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:09 crc kubenswrapper[4688]: I0930 17:51:09.980433 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckcsq"] Sep 30 17:51:09 crc kubenswrapper[4688]: W0930 17:51:09.996225 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17582640_d9bc_400e_abb5_070967ee119b.slice/crio-0f70975355cde646a39ce23256237465fcb2c2db981f618fa08dd04108a353a1 WatchSource:0}: Error finding container 0f70975355cde646a39ce23256237465fcb2c2db981f618fa08dd04108a353a1: Status 404 returned error can't find the container with id 0f70975355cde646a39ce23256237465fcb2c2db981f618fa08dd04108a353a1 Sep 30 17:51:10 crc kubenswrapper[4688]: I0930 17:51:10.153116 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4q9w"] Sep 30 17:51:10 crc kubenswrapper[4688]: W0930 17:51:10.233353 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9611611e_3678_4e95_883d_63b79b0274fd.slice/crio-6581acb274d6adb68194c3b5aa5371dcbdbdbd647e5b0cb915a5c1b46a3a8dfc WatchSource:0}: Error finding container 6581acb274d6adb68194c3b5aa5371dcbdbdbd647e5b0cb915a5c1b46a3a8dfc: Status 404 returned error can't find the container with id 6581acb274d6adb68194c3b5aa5371dcbdbdbd647e5b0cb915a5c1b46a3a8dfc Sep 30 17:51:10 crc kubenswrapper[4688]: I0930 17:51:10.860346 4688 generic.go:334] "Generic (PLEG): container finished" podID="17582640-d9bc-400e-abb5-070967ee119b" containerID="212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21" exitCode=0 Sep 30 17:51:10 crc kubenswrapper[4688]: I0930 17:51:10.860403 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcsq" event={"ID":"17582640-d9bc-400e-abb5-070967ee119b","Type":"ContainerDied","Data":"212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21"} Sep 30 17:51:10 crc kubenswrapper[4688]: I0930 17:51:10.860760 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcsq" event={"ID":"17582640-d9bc-400e-abb5-070967ee119b","Type":"ContainerStarted","Data":"0f70975355cde646a39ce23256237465fcb2c2db981f618fa08dd04108a353a1"} Sep 30 17:51:10 crc kubenswrapper[4688]: I0930 17:51:10.863149 4688 generic.go:334] "Generic (PLEG): container finished" podID="9611611e-3678-4e95-883d-63b79b0274fd" containerID="125966bbebd9eefb2796e0b22d44bf3fd9cde2151f28f101b4f397224d5e0421" exitCode=0 Sep 30 17:51:10 crc kubenswrapper[4688]: I0930 17:51:10.863217 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4q9w" event={"ID":"9611611e-3678-4e95-883d-63b79b0274fd","Type":"ContainerDied","Data":"125966bbebd9eefb2796e0b22d44bf3fd9cde2151f28f101b4f397224d5e0421"} Sep 30 17:51:10 crc kubenswrapper[4688]: I0930 17:51:10.863255 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4q9w" event={"ID":"9611611e-3678-4e95-883d-63b79b0274fd","Type":"ContainerStarted","Data":"6581acb274d6adb68194c3b5aa5371dcbdbdbd647e5b0cb915a5c1b46a3a8dfc"} 
Sep 30 17:51:10 crc kubenswrapper[4688]: I0930 17:51:10.866477 4688 generic.go:334] "Generic (PLEG): container finished" podID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerID="c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7" exitCode=0 Sep 30 17:51:10 crc kubenswrapper[4688]: I0930 17:51:10.866519 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj9jd" event={"ID":"6d7ea13a-8694-40d1-bf01-3ddc422d6d76","Type":"ContainerDied","Data":"c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7"} Sep 30 17:51:11 crc kubenswrapper[4688]: I0930 17:51:11.877446 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj9jd" event={"ID":"6d7ea13a-8694-40d1-bf01-3ddc422d6d76","Type":"ContainerStarted","Data":"3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341"} Sep 30 17:51:11 crc kubenswrapper[4688]: I0930 17:51:11.879485 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcsq" event={"ID":"17582640-d9bc-400e-abb5-070967ee119b","Type":"ContainerStarted","Data":"69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7"} Sep 30 17:51:11 crc kubenswrapper[4688]: I0930 17:51:11.881895 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4q9w" event={"ID":"9611611e-3678-4e95-883d-63b79b0274fd","Type":"ContainerStarted","Data":"f7ee728c1e6aa500f552ec3d620ae657222667417dbfc3095cafd22a2081d166"} Sep 30 17:51:11 crc kubenswrapper[4688]: I0930 17:51:11.894036 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rj9jd" podStartSLOduration=2.380566832 podStartE2EDuration="5.894013609s" podCreationTimestamp="2025-09-30 17:51:06 +0000 UTC" firstStartedPulling="2025-09-30 17:51:07.831932771 +0000 UTC m=+2125.127370329" lastFinishedPulling="2025-09-30 17:51:11.345379538 +0000 UTC m=+2128.640817106" observedRunningTime="2025-09-30 17:51:11.891854223 +0000 UTC m=+2129.187291781" watchObservedRunningTime="2025-09-30 17:51:11.894013609 +0000 UTC m=+2129.189451167" Sep 30 17:51:12 crc kubenswrapper[4688]: I0930 17:51:12.892596 4688 generic.go:334] "Generic (PLEG): container finished" podID="17582640-d9bc-400e-abb5-070967ee119b" containerID="69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7" exitCode=0 Sep 30 17:51:12 crc kubenswrapper[4688]: I0930 17:51:12.892640 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcsq" event={"ID":"17582640-d9bc-400e-abb5-070967ee119b","Type":"ContainerDied","Data":"69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7"} Sep 30 17:51:12 crc kubenswrapper[4688]: I0930 17:51:12.898841 4688 generic.go:334] "Generic (PLEG): container finished" podID="9611611e-3678-4e95-883d-63b79b0274fd" containerID="f7ee728c1e6aa500f552ec3d620ae657222667417dbfc3095cafd22a2081d166" exitCode=0 Sep 30 17:51:12 crc kubenswrapper[4688]: I0930 17:51:12.898910 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4q9w" event={"ID":"9611611e-3678-4e95-883d-63b79b0274fd","Type":"ContainerDied","Data":"f7ee728c1e6aa500f552ec3d620ae657222667417dbfc3095cafd22a2081d166"} Sep 30 17:51:13 crc kubenswrapper[4688]: I0930 17:51:13.909163 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcsq" 
event={"ID":"17582640-d9bc-400e-abb5-070967ee119b","Type":"ContainerStarted","Data":"180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b"} Sep 30 17:51:13 crc kubenswrapper[4688]: I0930 17:51:13.911690 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4q9w" event={"ID":"9611611e-3678-4e95-883d-63b79b0274fd","Type":"ContainerStarted","Data":"907abb59bcfd7bf2707daf2d63077b7f6adb8249844c1d28ae83655c7c053270"} Sep 30 17:51:13 crc kubenswrapper[4688]: I0930 17:51:13.926326 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ckcsq" podStartSLOduration=3.413893092 podStartE2EDuration="5.92630772s" podCreationTimestamp="2025-09-30 17:51:08 +0000 UTC" firstStartedPulling="2025-09-30 17:51:10.86213716 +0000 UTC m=+2128.157574718" lastFinishedPulling="2025-09-30 17:51:13.374551788 +0000 UTC m=+2130.669989346" observedRunningTime="2025-09-30 17:51:13.924416301 +0000 UTC m=+2131.219853859" watchObservedRunningTime="2025-09-30 17:51:13.92630772 +0000 UTC m=+2131.221745278" Sep 30 17:51:13 crc kubenswrapper[4688]: I0930 17:51:13.954076 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p4q9w" podStartSLOduration=2.194831258 podStartE2EDuration="4.954052478s" podCreationTimestamp="2025-09-30 17:51:09 +0000 UTC" firstStartedPulling="2025-09-30 17:51:10.864539103 +0000 UTC m=+2128.159976661" lastFinishedPulling="2025-09-30 17:51:13.623760323 +0000 UTC m=+2130.919197881" observedRunningTime="2025-09-30 17:51:13.947310224 +0000 UTC m=+2131.242747792" watchObservedRunningTime="2025-09-30 17:51:13.954052478 +0000 UTC m=+2131.249490036" Sep 30 17:51:16 crc kubenswrapper[4688]: I0930 17:51:16.912655 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:16 crc kubenswrapper[4688]: I0930 17:51:16.913219 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:16 crc kubenswrapper[4688]: I0930 17:51:16.962093 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:17 crc kubenswrapper[4688]: I0930 17:51:17.021997 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.309964 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.310660 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.375980 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj9jd"] Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.376240 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rj9jd" podUID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerName="registry-server" containerID="cri-o://3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341" gracePeriod=2 Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.378460 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.521066 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.521116 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.567703 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.806584 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.862727 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-catalog-content\") pod \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.862865 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-utilities\") pod \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.862893 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clphw\" (UniqueName: \"kubernetes.io/projected/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-kube-api-access-clphw\") pod \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\" (UID: \"6d7ea13a-8694-40d1-bf01-3ddc422d6d76\") " Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.863780 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-utilities" (OuterVolumeSpecName: "utilities") pod "6d7ea13a-8694-40d1-bf01-3ddc422d6d76" (UID: "6d7ea13a-8694-40d1-bf01-3ddc422d6d76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.868813 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-kube-api-access-clphw" (OuterVolumeSpecName: "kube-api-access-clphw") pod "6d7ea13a-8694-40d1-bf01-3ddc422d6d76" (UID: "6d7ea13a-8694-40d1-bf01-3ddc422d6d76"). InnerVolumeSpecName "kube-api-access-clphw". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.878875 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d7ea13a-8694-40d1-bf01-3ddc422d6d76" (UID: "6d7ea13a-8694-40d1-bf01-3ddc422d6d76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.960976 4688 generic.go:334] "Generic (PLEG): container finished" podID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerID="3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341" exitCode=0 Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.961065 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rj9jd" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.961070 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj9jd" event={"ID":"6d7ea13a-8694-40d1-bf01-3ddc422d6d76","Type":"ContainerDied","Data":"3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341"} Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.961157 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj9jd" event={"ID":"6d7ea13a-8694-40d1-bf01-3ddc422d6d76","Type":"ContainerDied","Data":"a547d7b2a4d26a1be7974308cdc9a985ed456839553ada8e93b5b9ca3ecc8dfe"} Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.961190 4688 scope.go:117] "RemoveContainer" containerID="3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.965668 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.965694 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clphw\" (UniqueName: \"kubernetes.io/projected/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-kube-api-access-clphw\") on node \"crc\" DevicePath \"\"" Sep 30 17:51:19 crc kubenswrapper[4688]: I0930 17:51:19.965705 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d7ea13a-8694-40d1-bf01-3ddc422d6d76-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:19.991467 4688 scope.go:117] "RemoveContainer" containerID="c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.008751 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj9jd"] Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.017259 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj9jd"] Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.019259 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.028307 4688 scope.go:117] "RemoveContainer" containerID="4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.034751 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.078191 4688 scope.go:117] "RemoveContainer" containerID="3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341" Sep 30 17:51:20 crc kubenswrapper[4688]: E0930 17:51:20.078590 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341\": container with ID starting with 3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341 not found: ID does not exist" containerID="3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.078647 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341"} err="failed to get container status \"3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341\": rpc error: code = NotFound desc = could not find container \"3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341\": container with ID starting with 3f66d4e2b6a9e9e14eea6c4a79eb4c6d0d7dfe8f1f91377a4444a6ba8e916341 not found: ID does not exist" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.078681 4688 scope.go:117] "RemoveContainer" containerID="c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7" Sep 30 17:51:20 crc kubenswrapper[4688]: E0930 17:51:20.079007 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7\": container with ID starting with c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7 not found: ID does not exist" containerID="c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.079046 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7"} err="failed to get container status \"c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7\": rpc error: code = NotFound desc = could not find container \"c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7\": container with ID starting with c48199e0ab55d51c8ee0f7b53a0960f11804c259b4a0c020d975ef4534a4bfc7 not found: ID does not exist" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.079072 4688 scope.go:117] "RemoveContainer" containerID="4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058" Sep 30 17:51:20 crc kubenswrapper[4688]: E0930 17:51:20.079304 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058\": container with ID starting with 4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058 not found: ID does not exist" containerID="4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058" Sep 30 17:51:20 crc kubenswrapper[4688]: I0930 17:51:20.079335 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058"} err="failed to get container status \"4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058\": rpc error: code = NotFound desc = could not find container \"4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058\": container with ID starting with 4e60792e0fe746985177a9b8ff0ab820d4b8f8cbd3bbdfedc46aaf7f567f9058 not found: ID does not exist" Sep 30 17:51:21 crc kubenswrapper[4688]: I0930 17:51:21.442046 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" path="/var/lib/kubelet/pods/6d7ea13a-8694-40d1-bf01-3ddc422d6d76/volumes" Sep 30 17:51:21 crc kubenswrapper[4688]: I0930 17:51:21.774964 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ckcsq"] Sep 30 17:51:21 crc kubenswrapper[4688]: I0930 17:51:21.977801 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ckcsq" podUID="17582640-d9bc-400e-abb5-070967ee119b" containerName="registry-server" containerID="cri-o://180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b" gracePeriod=2 Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.443633 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.512291 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b2f8\" (UniqueName: \"kubernetes.io/projected/17582640-d9bc-400e-abb5-070967ee119b-kube-api-access-5b2f8\") pod \"17582640-d9bc-400e-abb5-070967ee119b\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.512472 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-catalog-content\") pod \"17582640-d9bc-400e-abb5-070967ee119b\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.512555 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-utilities\") pod \"17582640-d9bc-400e-abb5-070967ee119b\" (UID: \"17582640-d9bc-400e-abb5-070967ee119b\") " Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.513528 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-utilities" (OuterVolumeSpecName: "utilities") pod "17582640-d9bc-400e-abb5-070967ee119b" (UID: "17582640-d9bc-400e-abb5-070967ee119b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.520695 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17582640-d9bc-400e-abb5-070967ee119b-kube-api-access-5b2f8" (OuterVolumeSpecName: "kube-api-access-5b2f8") pod "17582640-d9bc-400e-abb5-070967ee119b" (UID: "17582640-d9bc-400e-abb5-070967ee119b"). InnerVolumeSpecName "kube-api-access-5b2f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.546361 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.546483 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.568390 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17582640-d9bc-400e-abb5-070967ee119b" (UID: "17582640-d9bc-400e-abb5-070967ee119b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.614830 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b2f8\" (UniqueName: \"kubernetes.io/projected/17582640-d9bc-400e-abb5-070967ee119b-kube-api-access-5b2f8\") on node \"crc\" DevicePath \"\"" Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.614860 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.614869 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17582640-d9bc-400e-abb5-070967ee119b-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.999713 4688 generic.go:334] "Generic (PLEG): container finished" podID="17582640-d9bc-400e-abb5-070967ee119b" containerID="180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b" exitCode=0 Sep 30 17:51:22 crc kubenswrapper[4688]: I0930 17:51:22.999772 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcsq" event={"ID":"17582640-d9bc-400e-abb5-070967ee119b","Type":"ContainerDied","Data":"180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b"} Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:22.999812 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckcsq" event={"ID":"17582640-d9bc-400e-abb5-070967ee119b","Type":"ContainerDied","Data":"0f70975355cde646a39ce23256237465fcb2c2db981f618fa08dd04108a353a1"} Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:22.999836 4688 scope.go:117] "RemoveContainer" containerID="180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:22.999834 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ckcsq" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.026436 4688 scope.go:117] "RemoveContainer" containerID="69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.044474 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ckcsq"] Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.051383 4688 scope.go:117] "RemoveContainer" containerID="212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.057656 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ckcsq"] Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.093800 4688 scope.go:117] "RemoveContainer" containerID="180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b" Sep 30 17:51:23 crc kubenswrapper[4688]: E0930 17:51:23.094312 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b\": container with ID starting with 180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b not found: ID does not exist" containerID="180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.094352 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b"} err="failed to get container status \"180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b\": rpc error: code = NotFound desc = could not find container \"180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b\": container with ID starting with 180ad7d699743b64f64c627854e3121f2bda8a6b273b46edb4210c1a1731ca0b not found: ID does not exist" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.094379 4688 scope.go:117] "RemoveContainer" containerID="69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7" Sep 30 17:51:23 crc kubenswrapper[4688]: E0930 17:51:23.094990 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7\": container with ID starting with 69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7 not found: ID does not exist" containerID="69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.095041 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7"} err="failed to get container status \"69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7\": rpc error: code = NotFound desc = could not find container \"69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7\": container with ID starting with 69aa2dda55e6462c3f69c7815a49a6513fc6710627b052fdd2f3ff51cb9ff5e7 not found: ID does not exist" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.095062 4688 scope.go:117] "RemoveContainer" containerID="212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21" Sep 30 17:51:23 crc kubenswrapper[4688]: E0930 17:51:23.095358 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21\": container with ID starting with 212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21 not found: ID does not exist" containerID="212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.095394 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21"} err="failed to get container status \"212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21\": rpc error: code = NotFound desc = could not find container \"212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21\": container with ID starting with 212134e93c9b0b28c0ed9321835cd1b5884a41474662fc44e4efe29d080ede21 not found: ID does not exist" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.442638 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17582640-d9bc-400e-abb5-070967ee119b" path="/var/lib/kubelet/pods/17582640-d9bc-400e-abb5-070967ee119b/volumes" Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.575115 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4q9w"] Sep 30 17:51:23 crc kubenswrapper[4688]: I0930 17:51:23.575333 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p4q9w" podUID="9611611e-3678-4e95-883d-63b79b0274fd" containerName="registry-server" containerID="cri-o://907abb59bcfd7bf2707daf2d63077b7f6adb8249844c1d28ae83655c7c053270" gracePeriod=2 Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.010118 4688 generic.go:334] "Generic (PLEG): container finished" podID="9611611e-3678-4e95-883d-63b79b0274fd" containerID="907abb59bcfd7bf2707daf2d63077b7f6adb8249844c1d28ae83655c7c053270" exitCode=0 Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.010194 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4q9w" event={"ID":"9611611e-3678-4e95-883d-63b79b0274fd","Type":"ContainerDied","Data":"907abb59bcfd7bf2707daf2d63077b7f6adb8249844c1d28ae83655c7c053270"} Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.010232 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4q9w" event={"ID":"9611611e-3678-4e95-883d-63b79b0274fd","Type":"ContainerDied","Data":"6581acb274d6adb68194c3b5aa5371dcbdbdbd647e5b0cb915a5c1b46a3a8dfc"} Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.010242 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6581acb274d6adb68194c3b5aa5371dcbdbdbd647e5b0cb915a5c1b46a3a8dfc" Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.029226 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.041809 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-catalog-content\") pod \"9611611e-3678-4e95-883d-63b79b0274fd\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.041989 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-utilities\") pod \"9611611e-3678-4e95-883d-63b79b0274fd\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.042110 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5h9z\" (UniqueName: \"kubernetes.io/projected/9611611e-3678-4e95-883d-63b79b0274fd-kube-api-access-k5h9z\") pod \"9611611e-3678-4e95-883d-63b79b0274fd\" (UID: \"9611611e-3678-4e95-883d-63b79b0274fd\") " Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.043614 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-utilities" (OuterVolumeSpecName: "utilities") pod "9611611e-3678-4e95-883d-63b79b0274fd" (UID: "9611611e-3678-4e95-883d-63b79b0274fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.050714 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9611611e-3678-4e95-883d-63b79b0274fd-kube-api-access-k5h9z" (OuterVolumeSpecName: "kube-api-access-k5h9z") pod "9611611e-3678-4e95-883d-63b79b0274fd" (UID: "9611611e-3678-4e95-883d-63b79b0274fd"). InnerVolumeSpecName "kube-api-access-k5h9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.093312 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9611611e-3678-4e95-883d-63b79b0274fd" (UID: "9611611e-3678-4e95-883d-63b79b0274fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.148471 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.148526 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9611611e-3678-4e95-883d-63b79b0274fd-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:51:24 crc kubenswrapper[4688]: I0930 17:51:24.148544 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5h9z\" (UniqueName: \"kubernetes.io/projected/9611611e-3678-4e95-883d-63b79b0274fd-kube-api-access-k5h9z\") on node \"crc\" DevicePath \"\"" Sep 30 17:51:25 crc kubenswrapper[4688]: I0930 17:51:25.019393 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p4q9w" Sep 30 17:51:25 crc kubenswrapper[4688]: I0930 17:51:25.055567 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4q9w"] Sep 30 17:51:25 crc kubenswrapper[4688]: I0930 17:51:25.063307 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p4q9w"] Sep 30 17:51:25 crc kubenswrapper[4688]: I0930 17:51:25.442052 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9611611e-3678-4e95-883d-63b79b0274fd" path="/var/lib/kubelet/pods/9611611e-3678-4e95-883d-63b79b0274fd/volumes" Sep 30 17:51:52 crc kubenswrapper[4688]: I0930 17:51:52.545721 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:51:52 crc kubenswrapper[4688]: I0930 17:51:52.546365 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:52:22 crc kubenswrapper[4688]: I0930 17:52:22.545776 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:52:22 crc kubenswrapper[4688]: I0930 17:52:22.546264 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 17:52:22 crc kubenswrapper[4688]: I0930 17:52:22.546382 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 17:52:22 crc kubenswrapper[4688]: I0930 17:52:22.547643 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 17:52:22 crc kubenswrapper[4688]: I0930 17:52:22.547761 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" gracePeriod=600 Sep 30 17:52:22 crc kubenswrapper[4688]: E0930 17:52:22.681521 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:52:23 crc kubenswrapper[4688]: I0930 17:52:23.528207 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" exitCode=0 Sep 30 17:52:23 crc kubenswrapper[4688]: I0930 17:52:23.528255 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01"} Sep 30 17:52:23 crc kubenswrapper[4688]: I0930 17:52:23.528298 4688 scope.go:117] "RemoveContainer" containerID="2b77f55d7b58dbf58878e2a71bf48c68897966db1af74b1ec851794bc9fa339c" Sep 30 17:52:23 crc kubenswrapper[4688]: I0930 17:52:23.528961 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:52:23 crc kubenswrapper[4688]: E0930 17:52:23.529275 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:52:37 crc kubenswrapper[4688]: I0930 17:52:37.430541 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:52:37 crc kubenswrapper[4688]: E0930 17:52:37.431763 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:52:51 crc kubenswrapper[4688]: I0930 17:52:51.431160 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:52:51 crc kubenswrapper[4688]: E0930 17:52:51.432657 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:52:57 crc kubenswrapper[4688]: I0930 17:52:57.898494 4688 generic.go:334] "Generic (PLEG): container finished" podID="3002489d-18d9-4ab0-b09e-ab37e97ec796" containerID="3691a7ba3a6cb1c897f8adca002d5c3ddb3c12e6b30605481f5fa89c4d5e5c68" exitCode=0 Sep 30 17:52:57 crc kubenswrapper[4688]: I0930 17:52:57.898596 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d" 
event={"ID":"3002489d-18d9-4ab0-b09e-ab37e97ec796","Type":"ContainerDied","Data":"3691a7ba3a6cb1c897f8adca002d5c3ddb3c12e6b30605481f5fa89c4d5e5c68"} Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.341939 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.474918 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-bootstrap-combined-ca-bundle\") pod \"3002489d-18d9-4ab0-b09e-ab37e97ec796\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.475021 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5fmt\" (UniqueName: \"kubernetes.io/projected/3002489d-18d9-4ab0-b09e-ab37e97ec796-kube-api-access-x5fmt\") pod \"3002489d-18d9-4ab0-b09e-ab37e97ec796\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.475091 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-ssh-key\") pod \"3002489d-18d9-4ab0-b09e-ab37e97ec796\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.475224 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-inventory\") pod \"3002489d-18d9-4ab0-b09e-ab37e97ec796\" (UID: \"3002489d-18d9-4ab0-b09e-ab37e97ec796\") " Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.481805 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3002489d-18d9-4ab0-b09e-ab37e97ec796" (UID: "3002489d-18d9-4ab0-b09e-ab37e97ec796"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.483531 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3002489d-18d9-4ab0-b09e-ab37e97ec796-kube-api-access-x5fmt" (OuterVolumeSpecName: "kube-api-access-x5fmt") pod "3002489d-18d9-4ab0-b09e-ab37e97ec796" (UID: "3002489d-18d9-4ab0-b09e-ab37e97ec796"). InnerVolumeSpecName "kube-api-access-x5fmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.502963 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3002489d-18d9-4ab0-b09e-ab37e97ec796" (UID: "3002489d-18d9-4ab0-b09e-ab37e97ec796"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.509410 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-inventory" (OuterVolumeSpecName: "inventory") pod "3002489d-18d9-4ab0-b09e-ab37e97ec796" (UID: "3002489d-18d9-4ab0-b09e-ab37e97ec796"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.580186 4688 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.580238 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5fmt\" (UniqueName: \"kubernetes.io/projected/3002489d-18d9-4ab0-b09e-ab37e97ec796-kube-api-access-x5fmt\") on node \"crc\" DevicePath \"\"" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.580250 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.580265 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3002489d-18d9-4ab0-b09e-ab37e97ec796-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.925330 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d" event={"ID":"3002489d-18d9-4ab0-b09e-ab37e97ec796","Type":"ContainerDied","Data":"16ce799b30257d4df14beb09e6627f77030c167f6bf649b1aeab88ef4e50d0ba"} Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.925689 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ce799b30257d4df14beb09e6627f77030c167f6bf649b1aeab88ef4e50d0ba" Sep 30 17:52:59 crc kubenswrapper[4688]: I0930 17:52:59.925474 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.015678 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj"] Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016208 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17582640-d9bc-400e-abb5-070967ee119b" containerName="registry-server" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016232 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="17582640-d9bc-400e-abb5-070967ee119b" containerName="registry-server" Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016253 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerName="registry-server" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016269 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerName="registry-server" Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016304 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17582640-d9bc-400e-abb5-070967ee119b" containerName="extract-content" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016316 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="17582640-d9bc-400e-abb5-070967ee119b" containerName="extract-content" Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016330 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9611611e-3678-4e95-883d-63b79b0274fd" containerName="extract-utilities" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016338 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9611611e-3678-4e95-883d-63b79b0274fd" containerName="extract-utilities" Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016362 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17582640-d9bc-400e-abb5-070967ee119b" containerName="extract-utilities" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016373 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="17582640-d9bc-400e-abb5-070967ee119b" containerName="extract-utilities" Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016397 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9611611e-3678-4e95-883d-63b79b0274fd" containerName="extract-content" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016407 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9611611e-3678-4e95-883d-63b79b0274fd" containerName="extract-content" Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016421 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerName="extract-utilities" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016432 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerName="extract-utilities" Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016451 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9611611e-3678-4e95-883d-63b79b0274fd" containerName="registry-server" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016464 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9611611e-3678-4e95-883d-63b79b0274fd" containerName="registry-server" Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016481 4688 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3002489d-18d9-4ab0-b09e-ab37e97ec796" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016491 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3002489d-18d9-4ab0-b09e-ab37e97ec796" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Sep 30 17:53:00 crc kubenswrapper[4688]: E0930 17:53:00.016507 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerName="extract-content" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016515 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerName="extract-content" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016803 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3002489d-18d9-4ab0-b09e-ab37e97ec796" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016820 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9611611e-3678-4e95-883d-63b79b0274fd" containerName="registry-server" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016845 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="17582640-d9bc-400e-abb5-070967ee119b" containerName="registry-server" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.016871 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d7ea13a-8694-40d1-bf01-3ddc422d6d76" containerName="registry-server" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.017672 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.025551 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.025795 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.025973 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.026128 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.031425 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj"] Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.092882 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h8grj\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.092990 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7k9d\" (UniqueName: \"kubernetes.io/projected/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-kube-api-access-b7k9d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h8grj\" 
(UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.093017 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h8grj\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.194936 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7k9d\" (UniqueName: \"kubernetes.io/projected/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-kube-api-access-b7k9d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h8grj\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.194983 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h8grj\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.195147 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h8grj\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.199384 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h8grj\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.199625 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h8grj\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.215296 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7k9d\" (UniqueName: \"kubernetes.io/projected/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-kube-api-access-b7k9d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h8grj\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.374123 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:53:00 crc kubenswrapper[4688]: I0930 17:53:00.930216 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj"] Sep 30 17:53:01 crc kubenswrapper[4688]: I0930 17:53:01.945425 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" event={"ID":"7f941024-be27-4d1b-8fa3-3d8f1ff434ba","Type":"ContainerStarted","Data":"e82c62112c09d1ba358acc3402f02d989989c63d67cd29b1bfd370be5d16e9ce"} Sep 30 17:53:01 crc kubenswrapper[4688]: I0930 17:53:01.945814 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" event={"ID":"7f941024-be27-4d1b-8fa3-3d8f1ff434ba","Type":"ContainerStarted","Data":"fe3105dbd1c88242bdf4548948a403f72adecfba928af13cfcf3f1e427838ec4"} Sep 30 17:53:01 crc kubenswrapper[4688]: I0930 17:53:01.962544 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" podStartSLOduration=2.403414298 podStartE2EDuration="2.96252403s" podCreationTimestamp="2025-09-30 17:52:59 +0000 UTC" firstStartedPulling="2025-09-30 17:53:00.944375478 +0000 UTC m=+2238.239813036" lastFinishedPulling="2025-09-30 17:53:01.5034852 +0000 UTC m=+2238.798922768" observedRunningTime="2025-09-30 17:53:01.958965318 +0000 UTC m=+2239.254402896" watchObservedRunningTime="2025-09-30 17:53:01.96252403 +0000 UTC m=+2239.257961578" Sep 30 17:53:03 crc kubenswrapper[4688]: I0930 17:53:03.435221 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:53:03 crc kubenswrapper[4688]: E0930 17:53:03.435656 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:53:18 crc kubenswrapper[4688]: I0930 17:53:18.429530 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:53:18 crc kubenswrapper[4688]: E0930 17:53:18.430420 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:53:33 crc kubenswrapper[4688]: I0930 17:53:33.435033 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:53:33 crc kubenswrapper[4688]: E0930 17:53:33.435961 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:53:48 crc kubenswrapper[4688]: I0930 17:53:48.429987 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:53:48 crc kubenswrapper[4688]: E0930 17:53:48.430673 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:54:02 crc kubenswrapper[4688]: I0930 17:54:02.429848 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:54:02 crc kubenswrapper[4688]: E0930 17:54:02.430591 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:54:17 crc kubenswrapper[4688]: I0930 17:54:17.434924 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:54:17 crc kubenswrapper[4688]: E0930 17:54:17.441372 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:54:31 crc kubenswrapper[4688]: I0930 17:54:31.788553 4688 generic.go:334] "Generic (PLEG): container finished" podID="7f941024-be27-4d1b-8fa3-3d8f1ff434ba" containerID="e82c62112c09d1ba358acc3402f02d989989c63d67cd29b1bfd370be5d16e9ce" exitCode=0 Sep 30 17:54:31 crc kubenswrapper[4688]: I0930 17:54:31.788706 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" event={"ID":"7f941024-be27-4d1b-8fa3-3d8f1ff434ba","Type":"ContainerDied","Data":"e82c62112c09d1ba358acc3402f02d989989c63d67cd29b1bfd370be5d16e9ce"} Sep 30 17:54:32 crc kubenswrapper[4688]: I0930 17:54:32.429162 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:54:32 crc kubenswrapper[4688]: E0930 17:54:32.429723 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.257150 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.318863 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-ssh-key\") pod \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.319187 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-inventory\") pod \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.319239 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7k9d\" (UniqueName: \"kubernetes.io/projected/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-kube-api-access-b7k9d\") pod \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\" (UID: \"7f941024-be27-4d1b-8fa3-3d8f1ff434ba\") " Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.324860 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-kube-api-access-b7k9d" (OuterVolumeSpecName: "kube-api-access-b7k9d") pod "7f941024-be27-4d1b-8fa3-3d8f1ff434ba" (UID: "7f941024-be27-4d1b-8fa3-3d8f1ff434ba"). InnerVolumeSpecName "kube-api-access-b7k9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.346274 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-inventory" (OuterVolumeSpecName: "inventory") pod "7f941024-be27-4d1b-8fa3-3d8f1ff434ba" (UID: "7f941024-be27-4d1b-8fa3-3d8f1ff434ba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.359990 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7f941024-be27-4d1b-8fa3-3d8f1ff434ba" (UID: "7f941024-be27-4d1b-8fa3-3d8f1ff434ba"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.421400 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.421442 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7k9d\" (UniqueName: \"kubernetes.io/projected/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-kube-api-access-b7k9d\") on node \"crc\" DevicePath \"\"" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.421456 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f941024-be27-4d1b-8fa3-3d8f1ff434ba-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.812766 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" event={"ID":"7f941024-be27-4d1b-8fa3-3d8f1ff434ba","Type":"ContainerDied","Data":"fe3105dbd1c88242bdf4548948a403f72adecfba928af13cfcf3f1e427838ec4"} Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.812819 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe3105dbd1c88242bdf4548948a403f72adecfba928af13cfcf3f1e427838ec4" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.812881 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h8grj" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.913332 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s"] Sep 30 17:54:33 crc kubenswrapper[4688]: E0930 17:54:33.914055 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f941024-be27-4d1b-8fa3-3d8f1ff434ba" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.914215 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f941024-be27-4d1b-8fa3-3d8f1ff434ba" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.914691 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f941024-be27-4d1b-8fa3-3d8f1ff434ba" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.916187 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.918700 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.919340 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.919534 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.919747 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:54:33 crc kubenswrapper[4688]: I0930 17:54:33.923151 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s"] Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.038531 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z475s\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.038644 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z475s\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.038817 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dbwg\" (UniqueName: \"kubernetes.io/projected/4b7f1d4c-3151-4220-87e0-84fb8dff5710-kube-api-access-6dbwg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z475s\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.140066 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z475s\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.140448 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z475s\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.140670 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dbwg\" (UniqueName: \"kubernetes.io/projected/4b7f1d4c-3151-4220-87e0-84fb8dff5710-kube-api-access-6dbwg\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z475s\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.151736 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z475s\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.151807 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z475s\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.157618 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dbwg\" (UniqueName: \"kubernetes.io/projected/4b7f1d4c-3151-4220-87e0-84fb8dff5710-kube-api-access-6dbwg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z475s\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.288978 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.784063 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s"] Sep 30 17:54:34 crc kubenswrapper[4688]: I0930 17:54:34.821513 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" event={"ID":"4b7f1d4c-3151-4220-87e0-84fb8dff5710","Type":"ContainerStarted","Data":"c18d360c4eab9acbb69e70ecf501ca8d084cdbb7b42ae92bce40d589a2e7924c"} Sep 30 17:54:35 crc kubenswrapper[4688]: I0930 17:54:35.832720 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" event={"ID":"4b7f1d4c-3151-4220-87e0-84fb8dff5710","Type":"ContainerStarted","Data":"8aa1ba143f8c4d552da97374da9e72d8bea5134bd758eb72376bd6bff4ba5cf9"} Sep 30 17:54:35 crc kubenswrapper[4688]: I0930 17:54:35.847889 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" podStartSLOduration=2.376859753 podStartE2EDuration="2.847871915s" podCreationTimestamp="2025-09-30 17:54:33 +0000 UTC" firstStartedPulling="2025-09-30 17:54:34.789710392 +0000 UTC m=+2332.085147950" lastFinishedPulling="2025-09-30 17:54:35.260722564 +0000 UTC m=+2332.556160112" observedRunningTime="2025-09-30 17:54:35.847008832 +0000 UTC m=+2333.142446410" watchObservedRunningTime="2025-09-30 17:54:35.847871915 +0000 UTC m=+2333.143309493" Sep 30 17:54:43 crc kubenswrapper[4688]: I0930 17:54:43.442340 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:54:43 crc kubenswrapper[4688]: E0930 17:54:43.443391 4688 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:54:57 crc kubenswrapper[4688]: I0930 17:54:57.430542 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:54:57 crc kubenswrapper[4688]: E0930 17:54:57.431306 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:55:11 crc kubenswrapper[4688]: I0930 17:55:11.429541 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:55:11 crc kubenswrapper[4688]: E0930 17:55:11.431056 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:55:23 crc kubenswrapper[4688]: I0930 17:55:23.436493 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:55:23 crc kubenswrapper[4688]: E0930 17:55:23.437702 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:55:36 crc kubenswrapper[4688]: I0930 17:55:36.429532 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:55:36 crc kubenswrapper[4688]: E0930 17:55:36.430684 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:55:48 crc kubenswrapper[4688]: I0930 17:55:48.493890 4688 generic.go:334] "Generic (PLEG): container finished" podID="4b7f1d4c-3151-4220-87e0-84fb8dff5710" containerID="8aa1ba143f8c4d552da97374da9e72d8bea5134bd758eb72376bd6bff4ba5cf9" exitCode=0 Sep 30 17:55:48 crc kubenswrapper[4688]: I0930 17:55:48.494071 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" 
event={"ID":"4b7f1d4c-3151-4220-87e0-84fb8dff5710","Type":"ContainerDied","Data":"8aa1ba143f8c4d552da97374da9e72d8bea5134bd758eb72376bd6bff4ba5cf9"} Sep 30 17:55:49 crc kubenswrapper[4688]: I0930 17:55:49.902493 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:55:49 crc kubenswrapper[4688]: I0930 17:55:49.967062 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-ssh-key\") pod \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " Sep 30 17:55:49 crc kubenswrapper[4688]: I0930 17:55:49.967471 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-inventory\") pod \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " Sep 30 17:55:49 crc kubenswrapper[4688]: I0930 17:55:49.996344 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4b7f1d4c-3151-4220-87e0-84fb8dff5710" (UID: "4b7f1d4c-3151-4220-87e0-84fb8dff5710"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:55:49 crc kubenswrapper[4688]: I0930 17:55:49.999272 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-inventory" (OuterVolumeSpecName: "inventory") pod "4b7f1d4c-3151-4220-87e0-84fb8dff5710" (UID: "4b7f1d4c-3151-4220-87e0-84fb8dff5710"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.070100 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dbwg\" (UniqueName: \"kubernetes.io/projected/4b7f1d4c-3151-4220-87e0-84fb8dff5710-kube-api-access-6dbwg\") pod \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\" (UID: \"4b7f1d4c-3151-4220-87e0-84fb8dff5710\") " Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.070962 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.071004 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b7f1d4c-3151-4220-87e0-84fb8dff5710-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.073342 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b7f1d4c-3151-4220-87e0-84fb8dff5710-kube-api-access-6dbwg" (OuterVolumeSpecName: "kube-api-access-6dbwg") pod "4b7f1d4c-3151-4220-87e0-84fb8dff5710" (UID: "4b7f1d4c-3151-4220-87e0-84fb8dff5710"). InnerVolumeSpecName "kube-api-access-6dbwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.172536 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dbwg\" (UniqueName: \"kubernetes.io/projected/4b7f1d4c-3151-4220-87e0-84fb8dff5710-kube-api-access-6dbwg\") on node \"crc\" DevicePath \"\"" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.430759 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:55:50 crc kubenswrapper[4688]: E0930 17:55:50.431208 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.514654 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" event={"ID":"4b7f1d4c-3151-4220-87e0-84fb8dff5710","Type":"ContainerDied","Data":"c18d360c4eab9acbb69e70ecf501ca8d084cdbb7b42ae92bce40d589a2e7924c"} Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.514720 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z475s" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.514731 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c18d360c4eab9acbb69e70ecf501ca8d084cdbb7b42ae92bce40d589a2e7924c" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.646840 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq"] Sep 30 17:55:50 crc kubenswrapper[4688]: E0930 17:55:50.647313 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7f1d4c-3151-4220-87e0-84fb8dff5710" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.647336 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7f1d4c-3151-4220-87e0-84fb8dff5710" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.647597 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7f1d4c-3151-4220-87e0-84fb8dff5710" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.648322 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.651261 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.651510 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.651673 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.651842 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.658165 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq"] Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.782817 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.782916 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.783000 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65bpc\" (UniqueName: \"kubernetes.io/projected/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-kube-api-access-65bpc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.884819 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.884914 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.885002 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65bpc\" (UniqueName: \"kubernetes.io/projected/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-kube-api-access-65bpc\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.889912 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.890362 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.905755 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65bpc\" (UniqueName: \"kubernetes.io/projected/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-kube-api-access-65bpc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:50 crc kubenswrapper[4688]: I0930 17:55:50.965945 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:51 crc kubenswrapper[4688]: I0930 17:55:51.460939 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq"] Sep 30 17:55:51 crc kubenswrapper[4688]: I0930 17:55:51.522634 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" event={"ID":"4bb29aec-2e71-4505-b1d0-e79b6aaecceb","Type":"ContainerStarted","Data":"122dc94184e411ac5b92d1cd5bf5ec5d177279cf037881a69ae018fa7d09fd9b"} Sep 30 17:55:52 crc kubenswrapper[4688]: I0930 17:55:52.532178 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" event={"ID":"4bb29aec-2e71-4505-b1d0-e79b6aaecceb","Type":"ContainerStarted","Data":"92471f201d9ce007c419da254cca1bdde6b56641a906347e6c229dac33b5461f"} Sep 30 17:55:52 crc kubenswrapper[4688]: I0930 17:55:52.561184 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" podStartSLOduration=1.8450567329999998 podStartE2EDuration="2.561150473s" podCreationTimestamp="2025-09-30 17:55:50 +0000 UTC" firstStartedPulling="2025-09-30 17:55:51.469260538 +0000 UTC m=+2408.764698096" lastFinishedPulling="2025-09-30 17:55:52.185354268 +0000 UTC m=+2409.480791836" observedRunningTime="2025-09-30 17:55:52.553350521 +0000 UTC m=+2409.848788099" watchObservedRunningTime="2025-09-30 17:55:52.561150473 +0000 UTC m=+2409.856588031" Sep 30 17:55:57 crc kubenswrapper[4688]: I0930 17:55:57.574733 4688 generic.go:334] "Generic (PLEG): container finished" podID="4bb29aec-2e71-4505-b1d0-e79b6aaecceb" containerID="92471f201d9ce007c419da254cca1bdde6b56641a906347e6c229dac33b5461f" exitCode=0 Sep 30 17:55:57 crc kubenswrapper[4688]: 
Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.013513 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.071602 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65bpc\" (UniqueName: \"kubernetes.io/projected/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-kube-api-access-65bpc\") pod \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.071712 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-inventory\") pod \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.071775 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-ssh-key\") pod \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\" (UID: \"4bb29aec-2e71-4505-b1d0-e79b6aaecceb\") " Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.082800 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-kube-api-access-65bpc" (OuterVolumeSpecName: "kube-api-access-65bpc") pod "4bb29aec-2e71-4505-b1d0-e79b6aaecceb" (UID: "4bb29aec-2e71-4505-b1d0-e79b6aaecceb"). InnerVolumeSpecName "kube-api-access-65bpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.101312 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4bb29aec-2e71-4505-b1d0-e79b6aaecceb" (UID: "4bb29aec-2e71-4505-b1d0-e79b6aaecceb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.103844 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-inventory" (OuterVolumeSpecName: "inventory") pod "4bb29aec-2e71-4505-b1d0-e79b6aaecceb" (UID: "4bb29aec-2e71-4505-b1d0-e79b6aaecceb"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.172910 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.172944 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65bpc\" (UniqueName: \"kubernetes.io/projected/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-kube-api-access-65bpc\") on node \"crc\" DevicePath \"\"" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.172956 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bb29aec-2e71-4505-b1d0-e79b6aaecceb-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.594501 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" event={"ID":"4bb29aec-2e71-4505-b1d0-e79b6aaecceb","Type":"ContainerDied","Data":"122dc94184e411ac5b92d1cd5bf5ec5d177279cf037881a69ae018fa7d09fd9b"} Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.594555 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122dc94184e411ac5b92d1cd5bf5ec5d177279cf037881a69ae018fa7d09fd9b" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.594631 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.669362 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs"] Sep 30 17:55:59 crc kubenswrapper[4688]: E0930 17:55:59.669793 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb29aec-2e71-4505-b1d0-e79b6aaecceb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.669818 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb29aec-2e71-4505-b1d0-e79b6aaecceb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.670064 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb29aec-2e71-4505-b1d0-e79b6aaecceb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.670695 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.673441 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.674396 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.674967 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.678417 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.681311 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs"] Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.682059 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-64mcs\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.682125 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sms7x\" (UniqueName: \"kubernetes.io/projected/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-kube-api-access-sms7x\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-64mcs\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.682326 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-64mcs\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.784177 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-64mcs\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.784270 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sms7x\" (UniqueName: \"kubernetes.io/projected/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-kube-api-access-sms7x\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-64mcs\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.784720 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-64mcs\" (UID: 
\"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.788174 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-64mcs\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.790312 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-64mcs\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.818556 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sms7x\" (UniqueName: \"kubernetes.io/projected/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-kube-api-access-sms7x\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-64mcs\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:55:59 crc kubenswrapper[4688]: I0930 17:55:59.988596 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:56:00 crc kubenswrapper[4688]: I0930 17:56:00.512285 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs"] Sep 30 17:56:00 crc kubenswrapper[4688]: I0930 17:56:00.605110 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" event={"ID":"a39b16b2-47a0-4014-a6d5-286fb2daa1a2","Type":"ContainerStarted","Data":"0b5a81d4b552c86e1e5f27751f358fc027653c628a5929df06b08249522d5878"} Sep 30 17:56:01 crc kubenswrapper[4688]: I0930 17:56:01.616861 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" event={"ID":"a39b16b2-47a0-4014-a6d5-286fb2daa1a2","Type":"ContainerStarted","Data":"960a0e13a40f3d5fd4777ce6bd21d8155af32bf321ebca389e3d2f1852ff7c42"} Sep 30 17:56:04 crc kubenswrapper[4688]: I0930 17:56:04.429000 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:56:04 crc kubenswrapper[4688]: E0930 17:56:04.429626 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:56:16 crc kubenswrapper[4688]: I0930 17:56:16.429805 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:56:16 crc kubenswrapper[4688]: E0930 17:56:16.430840 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Sep 30 17:56:30 crc kubenswrapper[4688]: I0930 17:56:30.428725 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:56:30 crc kubenswrapper[4688]: E0930 17:56:30.429393 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:56:40 crc kubenswrapper[4688]: I0930 17:56:40.945111 4688 generic.go:334] "Generic (PLEG): container finished" podID="a39b16b2-47a0-4014-a6d5-286fb2daa1a2" containerID="960a0e13a40f3d5fd4777ce6bd21d8155af32bf321ebca389e3d2f1852ff7c42" exitCode=0 Sep 30 17:56:40 crc kubenswrapper[4688]: I0930 17:56:40.945211 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" event={"ID":"a39b16b2-47a0-4014-a6d5-286fb2daa1a2","Type":"ContainerDied","Data":"960a0e13a40f3d5fd4777ce6bd21d8155af32bf321ebca389e3d2f1852ff7c42"} Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.342570 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.531157 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-ssh-key\") pod \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.531533 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-inventory\") pod \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.531687 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sms7x\" (UniqueName: \"kubernetes.io/projected/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-kube-api-access-sms7x\") pod \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\" (UID: \"a39b16b2-47a0-4014-a6d5-286fb2daa1a2\") " Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.552958 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-kube-api-access-sms7x" (OuterVolumeSpecName: "kube-api-access-sms7x") pod "a39b16b2-47a0-4014-a6d5-286fb2daa1a2" (UID: "a39b16b2-47a0-4014-a6d5-286fb2daa1a2"). InnerVolumeSpecName "kube-api-access-sms7x".
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.558770 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-inventory" (OuterVolumeSpecName: "inventory") pod "a39b16b2-47a0-4014-a6d5-286fb2daa1a2" (UID: "a39b16b2-47a0-4014-a6d5-286fb2daa1a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.572154 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a39b16b2-47a0-4014-a6d5-286fb2daa1a2" (UID: "a39b16b2-47a0-4014-a6d5-286fb2daa1a2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.633282 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.633318 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sms7x\" (UniqueName: \"kubernetes.io/projected/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-kube-api-access-sms7x\") on node \"crc\" DevicePath \"\"" Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.633328 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a39b16b2-47a0-4014-a6d5-286fb2daa1a2-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.966846 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" event={"ID":"a39b16b2-47a0-4014-a6d5-286fb2daa1a2","Type":"ContainerDied","Data":"0b5a81d4b552c86e1e5f27751f358fc027653c628a5929df06b08249522d5878"} Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.966888 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b5a81d4b552c86e1e5f27751f358fc027653c628a5929df06b08249522d5878" Sep 30 17:56:42 crc kubenswrapper[4688]: I0930 17:56:42.966905 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-64mcs" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.127911 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq"] Sep 30 17:56:43 crc kubenswrapper[4688]: E0930 17:56:43.128267 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39b16b2-47a0-4014-a6d5-286fb2daa1a2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.128285 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39b16b2-47a0-4014-a6d5-286fb2daa1a2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.128494 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a39b16b2-47a0-4014-a6d5-286fb2daa1a2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.129160 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.133249 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.133981 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.134321 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.136601 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.144373 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq"] Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.245045 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qcftq\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.245249 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr8pv\" (UniqueName: \"kubernetes.io/projected/03ed4997-f615-4267-9738-7cd2fd9a0cb9-kube-api-access-gr8pv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qcftq\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.245415 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qcftq\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.347045 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr8pv\" (UniqueName: \"kubernetes.io/projected/03ed4997-f615-4267-9738-7cd2fd9a0cb9-kube-api-access-gr8pv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qcftq\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.347127 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qcftq\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.347230 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qcftq\" 
(UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.353302 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qcftq\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.360019 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qcftq\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.375748 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr8pv\" (UniqueName: \"kubernetes.io/projected/03ed4997-f615-4267-9738-7cd2fd9a0cb9-kube-api-access-gr8pv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qcftq\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.458098 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:56:43 crc kubenswrapper[4688]: I0930 17:56:43.466876 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:56:44 crc kubenswrapper[4688]: I0930 17:56:44.015740 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq"] Sep 30 17:56:44 crc kubenswrapper[4688]: I0930 17:56:44.024274 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 17:56:44 crc kubenswrapper[4688]: I0930 17:56:44.430230 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:56:44 crc kubenswrapper[4688]: E0930 17:56:44.430469 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:56:44 crc kubenswrapper[4688]: I0930 17:56:44.436519 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:56:44 crc kubenswrapper[4688]: I0930 17:56:44.985946 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" event={"ID":"03ed4997-f615-4267-9738-7cd2fd9a0cb9","Type":"ContainerStarted","Data":"32007b6022b5aa6c3ea156f418b673d891b86edc21c104b0c03a7085505218ae"} Sep 30 17:56:44 crc kubenswrapper[4688]: I0930 17:56:44.985997 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" 
event={"ID":"03ed4997-f615-4267-9738-7cd2fd9a0cb9","Type":"ContainerStarted","Data":"c0c21fb0109a1c2ec2b13443b49c8448cfdce2de9db0e143bdd3946d4cc07b50"} Sep 30 17:56:45 crc kubenswrapper[4688]: I0930 17:56:45.009154 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" podStartSLOduration=1.598525583 podStartE2EDuration="2.00913273s" podCreationTimestamp="2025-09-30 17:56:43 +0000 UTC" firstStartedPulling="2025-09-30 17:56:44.02402139 +0000 UTC m=+2461.319458948" lastFinishedPulling="2025-09-30 17:56:44.434628537 +0000 UTC m=+2461.730066095" observedRunningTime="2025-09-30 17:56:45.00217223 +0000 UTC m=+2462.297609798" watchObservedRunningTime="2025-09-30 17:56:45.00913273 +0000 UTC m=+2462.304570298" Sep 30 17:56:56 crc kubenswrapper[4688]: I0930 17:56:56.429017 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:56:56 crc kubenswrapper[4688]: E0930 17:56:56.429728 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:57:10 crc kubenswrapper[4688]: I0930 17:57:10.430029 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:57:10 crc kubenswrapper[4688]: E0930 17:57:10.430880 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:57:22 crc kubenswrapper[4688]: I0930 17:57:22.430408 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:57:22 crc kubenswrapper[4688]: E0930 17:57:22.431249 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 17:57:37 crc kubenswrapper[4688]: I0930 17:57:37.429530 4688 scope.go:117] "RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 17:57:38 crc kubenswrapper[4688]: I0930 17:57:38.487254 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"8783ed068e5c3d367e6e14871549f65c0a836962ecd2c9a8d140decb78493577"} Sep 30 17:57:42 crc kubenswrapper[4688]: I0930 17:57:42.530343 4688 generic.go:334] "Generic (PLEG): container finished" podID="03ed4997-f615-4267-9738-7cd2fd9a0cb9" 
containerID="32007b6022b5aa6c3ea156f418b673d891b86edc21c104b0c03a7085505218ae" exitCode=2 Sep 30 17:57:42 crc kubenswrapper[4688]: I0930 17:57:42.530640 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" event={"ID":"03ed4997-f615-4267-9738-7cd2fd9a0cb9","Type":"ContainerDied","Data":"32007b6022b5aa6c3ea156f418b673d891b86edc21c104b0c03a7085505218ae"} Sep 30 17:57:43 crc kubenswrapper[4688]: I0930 17:57:43.951732 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.042015 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-ssh-key\") pod \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.042396 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr8pv\" (UniqueName: \"kubernetes.io/projected/03ed4997-f615-4267-9738-7cd2fd9a0cb9-kube-api-access-gr8pv\") pod \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.042666 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-inventory\") pod \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\" (UID: \"03ed4997-f615-4267-9738-7cd2fd9a0cb9\") " Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.047345 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ed4997-f615-4267-9738-7cd2fd9a0cb9-kube-api-access-gr8pv" (OuterVolumeSpecName: "kube-api-access-gr8pv") pod "03ed4997-f615-4267-9738-7cd2fd9a0cb9" (UID: "03ed4997-f615-4267-9738-7cd2fd9a0cb9"). InnerVolumeSpecName "kube-api-access-gr8pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.074539 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-inventory" (OuterVolumeSpecName: "inventory") pod "03ed4997-f615-4267-9738-7cd2fd9a0cb9" (UID: "03ed4997-f615-4267-9738-7cd2fd9a0cb9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.078289 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "03ed4997-f615-4267-9738-7cd2fd9a0cb9" (UID: "03ed4997-f615-4267-9738-7cd2fd9a0cb9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.144898 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.144949 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03ed4997-f615-4267-9738-7cd2fd9a0cb9-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.144963 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr8pv\" (UniqueName: \"kubernetes.io/projected/03ed4997-f615-4267-9738-7cd2fd9a0cb9-kube-api-access-gr8pv\") on node \"crc\" DevicePath \"\"" Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.547991 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" event={"ID":"03ed4997-f615-4267-9738-7cd2fd9a0cb9","Type":"ContainerDied","Data":"c0c21fb0109a1c2ec2b13443b49c8448cfdce2de9db0e143bdd3946d4cc07b50"} Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.548668 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c21fb0109a1c2ec2b13443b49c8448cfdce2de9db0e143bdd3946d4cc07b50" Sep 30 17:57:44 crc kubenswrapper[4688]: I0930 17:57:44.548071 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qcftq" Sep 30 17:57:48 crc kubenswrapper[4688]: I0930 17:57:48.684895 4688 scope.go:117] "RemoveContainer" containerID="125966bbebd9eefb2796e0b22d44bf3fd9cde2151f28f101b4f397224d5e0421" Sep 30 17:57:48 crc kubenswrapper[4688]: I0930 17:57:48.727155 4688 scope.go:117] "RemoveContainer" containerID="f7ee728c1e6aa500f552ec3d620ae657222667417dbfc3095cafd22a2081d166" Sep 30 17:57:48 crc kubenswrapper[4688]: I0930 17:57:48.781047 4688 scope.go:117] "RemoveContainer" containerID="907abb59bcfd7bf2707daf2d63077b7f6adb8249844c1d28ae83655c7c053270" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.042100 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr"] Sep 30 17:57:52 crc kubenswrapper[4688]: E0930 17:57:52.042988 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ed4997-f615-4267-9738-7cd2fd9a0cb9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.043015 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ed4997-f615-4267-9738-7cd2fd9a0cb9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.043328 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ed4997-f615-4267-9738-7cd2fd9a0cb9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.044317 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.049330 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.049619 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.049859 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.052844 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.056017 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr"] Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.104629 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f64rr\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.105115 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f64rr\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.105215 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgb67\" (UniqueName: \"kubernetes.io/projected/f8b375f4-25b5-42f4-9b62-763fd31e3404-kube-api-access-wgb67\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f64rr\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.206773 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgb67\" (UniqueName: \"kubernetes.io/projected/f8b375f4-25b5-42f4-9b62-763fd31e3404-kube-api-access-wgb67\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f64rr\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.206845 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f64rr\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.206956 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f64rr\" 
(UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.212742 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f64rr\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.214683 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f64rr\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.222404 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgb67\" (UniqueName: \"kubernetes.io/projected/f8b375f4-25b5-42f4-9b62-763fd31e3404-kube-api-access-wgb67\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f64rr\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.378701 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:57:52 crc kubenswrapper[4688]: I0930 17:57:52.903218 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr"] Sep 30 17:57:53 crc kubenswrapper[4688]: I0930 17:57:53.627312 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" event={"ID":"f8b375f4-25b5-42f4-9b62-763fd31e3404","Type":"ContainerStarted","Data":"044b883db5b2898ef272390dde04883d6d35880442060d60fa6711d582b91b89"} Sep 30 17:57:53 crc kubenswrapper[4688]: I0930 17:57:53.627578 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" event={"ID":"f8b375f4-25b5-42f4-9b62-763fd31e3404","Type":"ContainerStarted","Data":"0224b4fdd449c8cbb99190e292c24ecd12b3039fd7df77ed3cfb6dcea79fad0e"} Sep 30 17:57:53 crc kubenswrapper[4688]: I0930 17:57:53.655732 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" podStartSLOduration=1.165938304 podStartE2EDuration="1.655709812s" podCreationTimestamp="2025-09-30 17:57:52 +0000 UTC" firstStartedPulling="2025-09-30 17:57:52.913165825 +0000 UTC m=+2530.208603383" lastFinishedPulling="2025-09-30 17:57:53.402937323 +0000 UTC m=+2530.698374891" observedRunningTime="2025-09-30 17:57:53.646074012 +0000 UTC m=+2530.941511590" watchObservedRunningTime="2025-09-30 17:57:53.655709812 +0000 UTC m=+2530.951147380" Sep 30 17:58:42 crc kubenswrapper[4688]: I0930 17:58:42.049446 4688 generic.go:334] "Generic (PLEG): container finished" podID="f8b375f4-25b5-42f4-9b62-763fd31e3404" containerID="044b883db5b2898ef272390dde04883d6d35880442060d60fa6711d582b91b89" exitCode=0 Sep 30 17:58:42 crc kubenswrapper[4688]: I0930 17:58:42.049525 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" event={"ID":"f8b375f4-25b5-42f4-9b62-763fd31e3404","Type":"ContainerDied","Data":"044b883db5b2898ef272390dde04883d6d35880442060d60fa6711d582b91b89"} Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.517256 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.645292 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgb67\" (UniqueName: \"kubernetes.io/projected/f8b375f4-25b5-42f4-9b62-763fd31e3404-kube-api-access-wgb67\") pod \"f8b375f4-25b5-42f4-9b62-763fd31e3404\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.645409 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-inventory\") pod \"f8b375f4-25b5-42f4-9b62-763fd31e3404\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.645435 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-ssh-key\") pod \"f8b375f4-25b5-42f4-9b62-763fd31e3404\" (UID: \"f8b375f4-25b5-42f4-9b62-763fd31e3404\") " Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.653557 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b375f4-25b5-42f4-9b62-763fd31e3404-kube-api-access-wgb67" (OuterVolumeSpecName: "kube-api-access-wgb67") pod "f8b375f4-25b5-42f4-9b62-763fd31e3404" (UID: "f8b375f4-25b5-42f4-9b62-763fd31e3404"). InnerVolumeSpecName "kube-api-access-wgb67". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.677406 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f8b375f4-25b5-42f4-9b62-763fd31e3404" (UID: "f8b375f4-25b5-42f4-9b62-763fd31e3404"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.684346 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-inventory" (OuterVolumeSpecName: "inventory") pod "f8b375f4-25b5-42f4-9b62-763fd31e3404" (UID: "f8b375f4-25b5-42f4-9b62-763fd31e3404"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.747304 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgb67\" (UniqueName: \"kubernetes.io/projected/f8b375f4-25b5-42f4-9b62-763fd31e3404-kube-api-access-wgb67\") on node \"crc\" DevicePath \"\"" Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.747360 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 17:58:43 crc kubenswrapper[4688]: I0930 17:58:43.747377 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8b375f4-25b5-42f4-9b62-763fd31e3404-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.070585 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" event={"ID":"f8b375f4-25b5-42f4-9b62-763fd31e3404","Type":"ContainerDied","Data":"0224b4fdd449c8cbb99190e292c24ecd12b3039fd7df77ed3cfb6dcea79fad0e"} Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.070637 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0224b4fdd449c8cbb99190e292c24ecd12b3039fd7df77ed3cfb6dcea79fad0e" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.070649 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f64rr" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.155432 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lb96p"] Sep 30 17:58:44 crc kubenswrapper[4688]: E0930 17:58:44.155832 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b375f4-25b5-42f4-9b62-763fd31e3404" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.155850 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b375f4-25b5-42f4-9b62-763fd31e3404" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.156059 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b375f4-25b5-42f4-9b62-763fd31e3404" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.156829 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.159428 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.159949 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.160140 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.160822 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.180403 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lb96p"] Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.256869 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lb96p\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.256975 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lb96p\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.257017 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vm5g\" (UniqueName: \"kubernetes.io/projected/386f214d-f18b-475f-828b-9fb63a356714-kube-api-access-6vm5g\") pod \"ssh-known-hosts-edpm-deployment-lb96p\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.358439 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lb96p\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.358581 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lb96p\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.358616 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vm5g\" (UniqueName: \"kubernetes.io/projected/386f214d-f18b-475f-828b-9fb63a356714-kube-api-access-6vm5g\") pod \"ssh-known-hosts-edpm-deployment-lb96p\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc 
kubenswrapper[4688]: I0930 17:58:44.366326 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lb96p\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.370189 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lb96p\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.375402 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vm5g\" (UniqueName: \"kubernetes.io/projected/386f214d-f18b-475f-828b-9fb63a356714-kube-api-access-6vm5g\") pod \"ssh-known-hosts-edpm-deployment-lb96p\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:44 crc kubenswrapper[4688]: I0930 17:58:44.479212 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:45 crc kubenswrapper[4688]: I0930 17:58:45.005735 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lb96p"] Sep 30 17:58:45 crc kubenswrapper[4688]: I0930 17:58:45.080220 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" event={"ID":"386f214d-f18b-475f-828b-9fb63a356714","Type":"ContainerStarted","Data":"010fd4d2a30a140d4c7f441aecf4bad658824e4bfe8119c337f72df4110282da"} Sep 30 17:58:46 crc kubenswrapper[4688]: I0930 17:58:46.088996 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" event={"ID":"386f214d-f18b-475f-828b-9fb63a356714","Type":"ContainerStarted","Data":"650ae2fab9aa089d826c90d30d54b3b37475884e0bf1dff23ca3d9ac7dfbec95"} Sep 30 17:58:46 crc kubenswrapper[4688]: I0930 17:58:46.112994 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" podStartSLOduration=1.659455349 podStartE2EDuration="2.112968629s" podCreationTimestamp="2025-09-30 17:58:44 +0000 UTC" firstStartedPulling="2025-09-30 17:58:45.01028984 +0000 UTC m=+2582.305727398" lastFinishedPulling="2025-09-30 17:58:45.46380312 +0000 UTC m=+2582.759240678" observedRunningTime="2025-09-30 17:58:46.106183643 +0000 UTC m=+2583.401621241" watchObservedRunningTime="2025-09-30 17:58:46.112968629 +0000 UTC m=+2583.408406197" Sep 30 17:58:53 crc kubenswrapper[4688]: I0930 17:58:53.163061 4688 generic.go:334] "Generic (PLEG): container finished" podID="386f214d-f18b-475f-828b-9fb63a356714" containerID="650ae2fab9aa089d826c90d30d54b3b37475884e0bf1dff23ca3d9ac7dfbec95" exitCode=0 Sep 30 17:58:53 crc kubenswrapper[4688]: I0930 17:58:53.163497 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" event={"ID":"386f214d-f18b-475f-828b-9fb63a356714","Type":"ContainerDied","Data":"650ae2fab9aa089d826c90d30d54b3b37475884e0bf1dff23ca3d9ac7dfbec95"} Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.588690 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p"
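
The pod_startup_latency_tracker entry above is internally consistent and worth decoding: podStartE2EDuration (2.112968629s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.659455349s) is that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The m=+... suffixes are Go monotonic clock readings, which appear to be what is actually subtracted, so in other entries the last digits can drift from wall-clock arithmetic by a few tens of nanoseconds. A quick check in Go with the timestamps copied from the lb96p entry (the relationship is inferred from the numbers in this capture, not taken from the tracker's source):

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the ssh-known-hosts-edpm-deployment-lb96p entry above.
	created := mustParse("2025-09-30 17:58:44 +0000 UTC")            // podCreationTimestamp
	firstPull := mustParse("2025-09-30 17:58:45.01028984 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-09-30 17:58:45.46380312 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-09-30 17:58:46.112968629 +0000 UTC")  // watchObservedRunningTime

	e2e := running.Sub(created)          // 2.112968629s == podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.659455349s == podStartSLOduration
	fmt.Println(e2e, slo)
}
```

The same identity holds for the run-os, reboot-os, and install-certs entries later in this capture.
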
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.666259 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-ssh-key-openstack-edpm-ipam\") pod \"386f214d-f18b-475f-828b-9fb63a356714\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.666465 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vm5g\" (UniqueName: \"kubernetes.io/projected/386f214d-f18b-475f-828b-9fb63a356714-kube-api-access-6vm5g\") pod \"386f214d-f18b-475f-828b-9fb63a356714\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.666601 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-inventory-0\") pod \"386f214d-f18b-475f-828b-9fb63a356714\" (UID: \"386f214d-f18b-475f-828b-9fb63a356714\") " Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.672112 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386f214d-f18b-475f-828b-9fb63a356714-kube-api-access-6vm5g" (OuterVolumeSpecName: "kube-api-access-6vm5g") pod "386f214d-f18b-475f-828b-9fb63a356714" (UID: "386f214d-f18b-475f-828b-9fb63a356714"). InnerVolumeSpecName "kube-api-access-6vm5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.694502 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "386f214d-f18b-475f-828b-9fb63a356714" (UID: "386f214d-f18b-475f-828b-9fb63a356714"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.705909 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "386f214d-f18b-475f-828b-9fb63a356714" (UID: "386f214d-f18b-475f-828b-9fb63a356714"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.768474 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vm5g\" (UniqueName: \"kubernetes.io/projected/386f214d-f18b-475f-828b-9fb63a356714-kube-api-access-6vm5g\") on node \"crc\" DevicePath \"\"" Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.768508 4688 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-inventory-0\") on node \"crc\" DevicePath \"\"" Sep 30 17:58:54 crc kubenswrapper[4688]: I0930 17:58:54.768518 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/386f214d-f18b-475f-828b-9fb63a356714-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.178159 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" event={"ID":"386f214d-f18b-475f-828b-9fb63a356714","Type":"ContainerDied","Data":"010fd4d2a30a140d4c7f441aecf4bad658824e4bfe8119c337f72df4110282da"} Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.178203 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="010fd4d2a30a140d4c7f441aecf4bad658824e4bfe8119c337f72df4110282da" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.178198 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lb96p" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.245314 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv"] Sep 30 17:58:55 crc kubenswrapper[4688]: E0930 17:58:55.245782 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386f214d-f18b-475f-828b-9fb63a356714" containerName="ssh-known-hosts-edpm-deployment" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.245799 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="386f214d-f18b-475f-828b-9fb63a356714" containerName="ssh-known-hosts-edpm-deployment" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.245992 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="386f214d-f18b-475f-828b-9fb63a356714" containerName="ssh-known-hosts-edpm-deployment" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.246638 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.249053 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.251529 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.252720 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.254638 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.263090 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv"] Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.381361 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-66ftv\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.381472 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-66ftv\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.381537 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9n96\" (UniqueName: \"kubernetes.io/projected/857e37c3-03d7-434d-80d4-6ee03574499e-kube-api-access-c9n96\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-66ftv\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.483633 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-66ftv\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.483747 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-66ftv\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.483794 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9n96\" (UniqueName: \"kubernetes.io/projected/857e37c3-03d7-434d-80d4-6ee03574499e-kube-api-access-c9n96\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-66ftv\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.489092 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-66ftv\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.489602 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-66ftv\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.502790 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9n96\" (UniqueName: \"kubernetes.io/projected/857e37c3-03d7-434d-80d4-6ee03574499e-kube-api-access-c9n96\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-66ftv\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:55 crc kubenswrapper[4688]: I0930 17:58:55.562066 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:58:56 crc kubenswrapper[4688]: I0930 17:58:56.147240 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv"] Sep 30 17:58:56 crc kubenswrapper[4688]: W0930 17:58:56.153650 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod857e37c3_03d7_434d_80d4_6ee03574499e.slice/crio-9d09ef09692abe3ac053d10c1036d4184d4404a999e3e56db3e64fbee518ec02 WatchSource:0}: Error finding container 9d09ef09692abe3ac053d10c1036d4184d4404a999e3e56db3e64fbee518ec02: Status 404 returned error can't find the container with id 9d09ef09692abe3ac053d10c1036d4184d4404a999e3e56db3e64fbee518ec02 Sep 30 17:58:56 crc kubenswrapper[4688]: I0930 17:58:56.188526 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" event={"ID":"857e37c3-03d7-434d-80d4-6ee03574499e","Type":"ContainerStarted","Data":"9d09ef09692abe3ac053d10c1036d4184d4404a999e3e56db3e64fbee518ec02"} Sep 30 17:58:57 crc kubenswrapper[4688]: I0930 17:58:57.198401 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" event={"ID":"857e37c3-03d7-434d-80d4-6ee03574499e","Type":"ContainerStarted","Data":"0e0f549283cef513c9809202eb96970dea0c4ff36c2490518080b1a4f600c801"} Sep 30 17:58:57 crc kubenswrapper[4688]: I0930 17:58:57.214134 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" podStartSLOduration=1.713020161 podStartE2EDuration="2.214111073s" podCreationTimestamp="2025-09-30 17:58:55 +0000 UTC" firstStartedPulling="2025-09-30 17:58:56.159432868 +0000 UTC m=+2593.454870426" lastFinishedPulling="2025-09-30 17:58:56.66052378 +0000 UTC m=+2593.955961338" observedRunningTime="2025-09-30 17:58:57.211452724 +0000 UTC m=+2594.506890292" watchObservedRunningTime="2025-09-30 17:58:57.214111073 +0000 UTC 
m=+2594.509548631" Sep 30 17:59:05 crc kubenswrapper[4688]: I0930 17:59:05.264601 4688 generic.go:334] "Generic (PLEG): container finished" podID="857e37c3-03d7-434d-80d4-6ee03574499e" containerID="0e0f549283cef513c9809202eb96970dea0c4ff36c2490518080b1a4f600c801" exitCode=0 Sep 30 17:59:05 crc kubenswrapper[4688]: I0930 17:59:05.264694 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" event={"ID":"857e37c3-03d7-434d-80d4-6ee03574499e","Type":"ContainerDied","Data":"0e0f549283cef513c9809202eb96970dea0c4ff36c2490518080b1a4f600c801"} Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.650801 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.796745 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-ssh-key\") pod \"857e37c3-03d7-434d-80d4-6ee03574499e\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.796908 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9n96\" (UniqueName: \"kubernetes.io/projected/857e37c3-03d7-434d-80d4-6ee03574499e-kube-api-access-c9n96\") pod \"857e37c3-03d7-434d-80d4-6ee03574499e\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.796935 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-inventory\") pod \"857e37c3-03d7-434d-80d4-6ee03574499e\" (UID: \"857e37c3-03d7-434d-80d4-6ee03574499e\") " Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.802718 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857e37c3-03d7-434d-80d4-6ee03574499e-kube-api-access-c9n96" (OuterVolumeSpecName: "kube-api-access-c9n96") pod "857e37c3-03d7-434d-80d4-6ee03574499e" (UID: "857e37c3-03d7-434d-80d4-6ee03574499e"). InnerVolumeSpecName "kube-api-access-c9n96". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.841279 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-inventory" (OuterVolumeSpecName: "inventory") pod "857e37c3-03d7-434d-80d4-6ee03574499e" (UID: "857e37c3-03d7-434d-80d4-6ee03574499e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.841326 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "857e37c3-03d7-434d-80d4-6ee03574499e" (UID: "857e37c3-03d7-434d-80d4-6ee03574499e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.899351 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9n96\" (UniqueName: \"kubernetes.io/projected/857e37c3-03d7-434d-80d4-6ee03574499e-kube-api-access-c9n96\") on node \"crc\" DevicePath \"\"" Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.899395 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 17:59:06 crc kubenswrapper[4688]: I0930 17:59:06.899406 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/857e37c3-03d7-434d-80d4-6ee03574499e-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.285725 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" event={"ID":"857e37c3-03d7-434d-80d4-6ee03574499e","Type":"ContainerDied","Data":"9d09ef09692abe3ac053d10c1036d4184d4404a999e3e56db3e64fbee518ec02"} Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.285774 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d09ef09692abe3ac053d10c1036d4184d4404a999e3e56db3e64fbee518ec02" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.285835 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-66ftv" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.372745 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45"] Sep 30 17:59:07 crc kubenswrapper[4688]: E0930 17:59:07.373418 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857e37c3-03d7-434d-80d4-6ee03574499e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.373551 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="857e37c3-03d7-434d-80d4-6ee03574499e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.374078 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="857e37c3-03d7-434d-80d4-6ee03574499e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.375509 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.377888 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.378188 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.378322 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.380085 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.382741 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45"] Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.509307 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.509367 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.509637 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klf7c\" (UniqueName: \"kubernetes.io/projected/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-kube-api-access-klf7c\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.612118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.612204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.612311 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klf7c\" (UniqueName: \"kubernetes.io/projected/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-kube-api-access-klf7c\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45\" (UID: 
\"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.616045 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.616729 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.632105 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klf7c\" (UniqueName: \"kubernetes.io/projected/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-kube-api-access-klf7c\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:07 crc kubenswrapper[4688]: I0930 17:59:07.693443 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:08 crc kubenswrapper[4688]: I0930 17:59:08.223107 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45"] Sep 30 17:59:08 crc kubenswrapper[4688]: I0930 17:59:08.294008 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" event={"ID":"b95ac163-c6b0-4fa0-a04c-5d4e0389c238","Type":"ContainerStarted","Data":"e3100c1488987b2d93c7e4a23f8a58d52be62f68888059cfd5896ce713be9e5c"} Sep 30 17:59:09 crc kubenswrapper[4688]: I0930 17:59:09.302350 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" event={"ID":"b95ac163-c6b0-4fa0-a04c-5d4e0389c238","Type":"ContainerStarted","Data":"88aefb7f1ca213c8dae50ac35e8796fee6f39e2d4b489f351be87fd1b6e67e33"} Sep 30 17:59:09 crc kubenswrapper[4688]: I0930 17:59:09.326756 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" podStartSLOduration=1.858166174 podStartE2EDuration="2.326734133s" podCreationTimestamp="2025-09-30 17:59:07 +0000 UTC" firstStartedPulling="2025-09-30 17:59:08.228188862 +0000 UTC m=+2605.523626420" lastFinishedPulling="2025-09-30 17:59:08.696756821 +0000 UTC m=+2605.992194379" observedRunningTime="2025-09-30 17:59:09.321048276 +0000 UTC m=+2606.616485844" watchObservedRunningTime="2025-09-30 17:59:09.326734133 +0000 UTC m=+2606.622171691" Sep 30 17:59:18 crc kubenswrapper[4688]: I0930 17:59:18.381186 4688 generic.go:334] "Generic (PLEG): container finished" podID="b95ac163-c6b0-4fa0-a04c-5d4e0389c238" containerID="88aefb7f1ca213c8dae50ac35e8796fee6f39e2d4b489f351be87fd1b6e67e33" exitCode=0 Sep 30 17:59:18 crc kubenswrapper[4688]: I0930 17:59:18.381299 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" 
event={"ID":"b95ac163-c6b0-4fa0-a04c-5d4e0389c238","Type":"ContainerDied","Data":"88aefb7f1ca213c8dae50ac35e8796fee6f39e2d4b489f351be87fd1b6e67e33"} Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.772816 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.843011 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-inventory\") pod \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.843173 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-ssh-key\") pod \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.843269 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klf7c\" (UniqueName: \"kubernetes.io/projected/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-kube-api-access-klf7c\") pod \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\" (UID: \"b95ac163-c6b0-4fa0-a04c-5d4e0389c238\") " Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.866925 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-kube-api-access-klf7c" (OuterVolumeSpecName: "kube-api-access-klf7c") pod "b95ac163-c6b0-4fa0-a04c-5d4e0389c238" (UID: "b95ac163-c6b0-4fa0-a04c-5d4e0389c238"). InnerVolumeSpecName "kube-api-access-klf7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.877602 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-inventory" (OuterVolumeSpecName: "inventory") pod "b95ac163-c6b0-4fa0-a04c-5d4e0389c238" (UID: "b95ac163-c6b0-4fa0-a04c-5d4e0389c238"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.893418 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b95ac163-c6b0-4fa0-a04c-5d4e0389c238" (UID: "b95ac163-c6b0-4fa0-a04c-5d4e0389c238"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.946382 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.946417 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klf7c\" (UniqueName: \"kubernetes.io/projected/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-kube-api-access-klf7c\") on node \"crc\" DevicePath \"\"" Sep 30 17:59:19 crc kubenswrapper[4688]: I0930 17:59:19.946430 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b95ac163-c6b0-4fa0-a04c-5d4e0389c238-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.403622 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" event={"ID":"b95ac163-c6b0-4fa0-a04c-5d4e0389c238","Type":"ContainerDied","Data":"e3100c1488987b2d93c7e4a23f8a58d52be62f68888059cfd5896ce713be9e5c"} Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.403681 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.403697 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3100c1488987b2d93c7e4a23f8a58d52be62f68888059cfd5896ce713be9e5c" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.488311 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch"] Sep 30 17:59:20 crc kubenswrapper[4688]: E0930 17:59:20.488714 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95ac163-c6b0-4fa0-a04c-5d4e0389c238" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.488732 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95ac163-c6b0-4fa0-a04c-5d4e0389c238" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.488917 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95ac163-c6b0-4fa0-a04c-5d4e0389c238" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.489545 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.492332 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.492543 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.492549 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.492966 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.492356 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.492434 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.492436 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.494384 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.511051 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch"] Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.556996 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557068 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzff\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-kube-api-access-krzff\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557121 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557145 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557165 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557202 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557238 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557276 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557319 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557350 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557383 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557425 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557472 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.557499 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659388 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659441 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659476 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659513 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 
17:59:20.659532 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659602 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659631 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krzff\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-kube-api-access-krzff\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659668 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659685 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659703 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659725 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659748 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659775 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.659805 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.665306 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.665764 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.665856 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.665942 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.666748 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 
17:59:20.666934 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.667224 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.668097 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.668627 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.668792 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.669350 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.672795 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.673317 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: 
\"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.675086 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krzff\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-kube-api-access-krzff\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d7wch\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:20 crc kubenswrapper[4688]: I0930 17:59:20.811106 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 17:59:21 crc kubenswrapper[4688]: I0930 17:59:21.389370 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch"] Sep 30 17:59:21 crc kubenswrapper[4688]: W0930 17:59:21.397809 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf29e19_4d73_430a_937d_2b2b02ef6d70.slice/crio-681c722b7f3f8a25613a9acaf393ce8ff2ff67aa82088b21b921b136e2f9c7e8 WatchSource:0}: Error finding container 681c722b7f3f8a25613a9acaf393ce8ff2ff67aa82088b21b921b136e2f9c7e8: Status 404 returned error can't find the container with id 681c722b7f3f8a25613a9acaf393ce8ff2ff67aa82088b21b921b136e2f9c7e8 Sep 30 17:59:21 crc kubenswrapper[4688]: I0930 17:59:21.413125 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" event={"ID":"faf29e19-4d73-430a-937d-2b2b02ef6d70","Type":"ContainerStarted","Data":"681c722b7f3f8a25613a9acaf393ce8ff2ff67aa82088b21b921b136e2f9c7e8"} Sep 30 17:59:22 crc kubenswrapper[4688]: I0930 17:59:22.427707 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" event={"ID":"faf29e19-4d73-430a-937d-2b2b02ef6d70","Type":"ContainerStarted","Data":"16cc7963e83e98c1b50806ec92ee0559703f85e4418b68652dc3bc71c0c5990c"} Sep 30 17:59:22 crc kubenswrapper[4688]: I0930 17:59:22.461437 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" podStartSLOduration=2.039894625 podStartE2EDuration="2.461413816s" podCreationTimestamp="2025-09-30 17:59:20 +0000 UTC" firstStartedPulling="2025-09-30 17:59:21.400822078 +0000 UTC m=+2618.696259636" lastFinishedPulling="2025-09-30 17:59:21.822341269 +0000 UTC m=+2619.117778827" observedRunningTime="2025-09-30 17:59:22.456881479 +0000 UTC m=+2619.752319067" watchObservedRunningTime="2025-09-30 17:59:22.461413816 +0000 UTC m=+2619.756851404" Sep 30 17:59:29 crc kubenswrapper[4688]: I0930 17:59:29.850838 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jr5x5"] Sep 30 17:59:29 crc kubenswrapper[4688]: I0930 17:59:29.856061 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:29 crc kubenswrapper[4688]: I0930 17:59:29.874832 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr5x5"] Sep 30 17:59:29 crc kubenswrapper[4688]: I0930 17:59:29.933399 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-utilities\") pod \"redhat-operators-jr5x5\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:29 crc kubenswrapper[4688]: I0930 17:59:29.933482 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-catalog-content\") pod \"redhat-operators-jr5x5\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:29 crc kubenswrapper[4688]: I0930 17:59:29.933511 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqj84\" (UniqueName: \"kubernetes.io/projected/f46d5667-8210-4500-878e-deeaa13e7d15-kube-api-access-pqj84\") pod \"redhat-operators-jr5x5\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:30 crc kubenswrapper[4688]: I0930 17:59:30.036113 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-catalog-content\") pod \"redhat-operators-jr5x5\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:30 crc kubenswrapper[4688]: I0930 17:59:30.036179 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqj84\" (UniqueName: \"kubernetes.io/projected/f46d5667-8210-4500-878e-deeaa13e7d15-kube-api-access-pqj84\") pod \"redhat-operators-jr5x5\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:30 crc kubenswrapper[4688]: I0930 17:59:30.036393 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-utilities\") pod \"redhat-operators-jr5x5\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:30 crc kubenswrapper[4688]: I0930 17:59:30.036769 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-catalog-content\") pod \"redhat-operators-jr5x5\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:30 crc kubenswrapper[4688]: I0930 17:59:30.037036 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-utilities\") pod \"redhat-operators-jr5x5\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:30 crc kubenswrapper[4688]: I0930 17:59:30.057147 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pqj84\" (UniqueName: \"kubernetes.io/projected/f46d5667-8210-4500-878e-deeaa13e7d15-kube-api-access-pqj84\") pod \"redhat-operators-jr5x5\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:30 crc kubenswrapper[4688]: I0930 17:59:30.176646 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:30 crc kubenswrapper[4688]: I0930 17:59:30.638386 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr5x5"] Sep 30 17:59:31 crc kubenswrapper[4688]: I0930 17:59:31.509826 4688 generic.go:334] "Generic (PLEG): container finished" podID="f46d5667-8210-4500-878e-deeaa13e7d15" containerID="393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641" exitCode=0 Sep 30 17:59:31 crc kubenswrapper[4688]: I0930 17:59:31.510044 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr5x5" event={"ID":"f46d5667-8210-4500-878e-deeaa13e7d15","Type":"ContainerDied","Data":"393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641"} Sep 30 17:59:31 crc kubenswrapper[4688]: I0930 17:59:31.512695 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr5x5" event={"ID":"f46d5667-8210-4500-878e-deeaa13e7d15","Type":"ContainerStarted","Data":"9f6a8907ff1d35cd37cf5499d95abcc3bb29f9ae753763977b061d41584d091f"} Sep 30 17:59:33 crc kubenswrapper[4688]: I0930 17:59:33.529749 4688 generic.go:334] "Generic (PLEG): container finished" podID="f46d5667-8210-4500-878e-deeaa13e7d15" containerID="71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80" exitCode=0 Sep 30 17:59:33 crc kubenswrapper[4688]: I0930 17:59:33.529863 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr5x5" event={"ID":"f46d5667-8210-4500-878e-deeaa13e7d15","Type":"ContainerDied","Data":"71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80"} Sep 30 17:59:34 crc kubenswrapper[4688]: I0930 17:59:34.541971 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr5x5" event={"ID":"f46d5667-8210-4500-878e-deeaa13e7d15","Type":"ContainerStarted","Data":"05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab"} Sep 30 17:59:34 crc kubenswrapper[4688]: I0930 17:59:34.568331 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jr5x5" podStartSLOduration=3.084184739 podStartE2EDuration="5.568312188s" podCreationTimestamp="2025-09-30 17:59:29 +0000 UTC" firstStartedPulling="2025-09-30 17:59:31.511827149 +0000 UTC m=+2628.807264727" lastFinishedPulling="2025-09-30 17:59:33.995954618 +0000 UTC m=+2631.291392176" observedRunningTime="2025-09-30 17:59:34.558556945 +0000 UTC m=+2631.853994533" watchObservedRunningTime="2025-09-30 17:59:34.568312188 +0000 UTC m=+2631.863749766" Sep 30 17:59:40 crc kubenswrapper[4688]: I0930 17:59:40.177432 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:40 crc kubenswrapper[4688]: I0930 17:59:40.178363 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:40 crc kubenswrapper[4688]: I0930 17:59:40.244225 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:40 crc kubenswrapper[4688]: I0930 17:59:40.641768 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:40 crc kubenswrapper[4688]: I0930 17:59:40.694180 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr5x5"] Sep 30 17:59:42 crc kubenswrapper[4688]: I0930 17:59:42.609607 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jr5x5" podUID="f46d5667-8210-4500-878e-deeaa13e7d15" containerName="registry-server" containerID="cri-o://05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab" gracePeriod=2 Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.085092 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.206100 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-catalog-content\") pod \"f46d5667-8210-4500-878e-deeaa13e7d15\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.206665 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-utilities\") pod \"f46d5667-8210-4500-878e-deeaa13e7d15\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.206754 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqj84\" (UniqueName: \"kubernetes.io/projected/f46d5667-8210-4500-878e-deeaa13e7d15-kube-api-access-pqj84\") pod \"f46d5667-8210-4500-878e-deeaa13e7d15\" (UID: \"f46d5667-8210-4500-878e-deeaa13e7d15\") " Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.207367 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-utilities" (OuterVolumeSpecName: "utilities") pod "f46d5667-8210-4500-878e-deeaa13e7d15" (UID: "f46d5667-8210-4500-878e-deeaa13e7d15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.250986 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46d5667-8210-4500-878e-deeaa13e7d15-kube-api-access-pqj84" (OuterVolumeSpecName: "kube-api-access-pqj84") pod "f46d5667-8210-4500-878e-deeaa13e7d15" (UID: "f46d5667-8210-4500-878e-deeaa13e7d15"). InnerVolumeSpecName "kube-api-access-pqj84". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.289993 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f46d5667-8210-4500-878e-deeaa13e7d15" (UID: "f46d5667-8210-4500-878e-deeaa13e7d15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.308780 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqj84\" (UniqueName: \"kubernetes.io/projected/f46d5667-8210-4500-878e-deeaa13e7d15-kube-api-access-pqj84\") on node \"crc\" DevicePath \"\"" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.308823 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.308831 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f46d5667-8210-4500-878e-deeaa13e7d15-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.621430 4688 generic.go:334] "Generic (PLEG): container finished" podID="f46d5667-8210-4500-878e-deeaa13e7d15" containerID="05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab" exitCode=0 Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.621488 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr5x5" event={"ID":"f46d5667-8210-4500-878e-deeaa13e7d15","Type":"ContainerDied","Data":"05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab"} Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.621521 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr5x5" event={"ID":"f46d5667-8210-4500-878e-deeaa13e7d15","Type":"ContainerDied","Data":"9f6a8907ff1d35cd37cf5499d95abcc3bb29f9ae753763977b061d41584d091f"} Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.621543 4688 scope.go:117] "RemoveContainer" containerID="05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.621633 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr5x5" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.652854 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr5x5"] Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.653503 4688 scope.go:117] "RemoveContainer" containerID="71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.660456 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jr5x5"] Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.677628 4688 scope.go:117] "RemoveContainer" containerID="393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.726288 4688 scope.go:117] "RemoveContainer" containerID="05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab" Sep 30 17:59:43 crc kubenswrapper[4688]: E0930 17:59:43.726992 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab\": container with ID starting with 05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab not found: ID does not exist" containerID="05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.727094 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab"} err="failed to get container status \"05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab\": rpc error: code = NotFound desc = could not find container \"05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab\": container with ID starting with 05e652c7886a86ba2e618f1e0a9a90db378f65beebc241aeb8bdaf5e4dfba5ab not found: ID does not exist" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.727139 4688 scope.go:117] "RemoveContainer" containerID="71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80" Sep 30 17:59:43 crc kubenswrapper[4688]: E0930 17:59:43.727734 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80\": container with ID starting with 71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80 not found: ID does not exist" containerID="71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.727800 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80"} err="failed to get container status \"71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80\": rpc error: code = NotFound desc = could not find container \"71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80\": container with ID starting with 71e2b500b78c97efdbddbabbbaf9b1c01cecf56ec1e51e9ae24a22e4d44a7d80 not found: ID does not exist" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.727844 4688 scope.go:117] "RemoveContainer" containerID="393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641" Sep 30 17:59:43 crc kubenswrapper[4688]: E0930 17:59:43.728407 4688 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641\": container with ID starting with 393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641 not found: ID does not exist" containerID="393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641" Sep 30 17:59:43 crc kubenswrapper[4688]: I0930 17:59:43.728471 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641"} err="failed to get container status \"393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641\": rpc error: code = NotFound desc = could not find container \"393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641\": container with ID starting with 393ee195db0ac513e08c281d7e5211ae0b80cdcbb0eb5d869e62a30822f46641 not found: ID does not exist" Sep 30 17:59:45 crc kubenswrapper[4688]: I0930 17:59:45.441784 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f46d5667-8210-4500-878e-deeaa13e7d15" path="/var/lib/kubelet/pods/f46d5667-8210-4500-878e-deeaa13e7d15/volumes" Sep 30 17:59:52 crc kubenswrapper[4688]: I0930 17:59:52.545366 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 17:59:52 crc kubenswrapper[4688]: I0930 17:59:52.545807 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.148251 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw"] Sep 30 18:00:00 crc kubenswrapper[4688]: E0930 18:00:00.149171 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46d5667-8210-4500-878e-deeaa13e7d15" containerName="extract-content" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.149183 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46d5667-8210-4500-878e-deeaa13e7d15" containerName="extract-content" Sep 30 18:00:00 crc kubenswrapper[4688]: E0930 18:00:00.149193 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46d5667-8210-4500-878e-deeaa13e7d15" containerName="registry-server" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.149199 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46d5667-8210-4500-878e-deeaa13e7d15" containerName="registry-server" Sep 30 18:00:00 crc kubenswrapper[4688]: E0930 18:00:00.149226 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46d5667-8210-4500-878e-deeaa13e7d15" containerName="extract-utilities" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.149232 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46d5667-8210-4500-878e-deeaa13e7d15" containerName="extract-utilities" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.150358 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46d5667-8210-4500-878e-deeaa13e7d15" containerName="registry-server" Sep 30 18:00:00 crc 
kubenswrapper[4688]: I0930 18:00:00.158407 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.166732 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.166756 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.182051 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw"] Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.340792 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f907ce9-693b-49d7-9505-295b23e4e208-config-volume\") pod \"collect-profiles-29320920-nf2jw\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.341870 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwcwd\" (UniqueName: \"kubernetes.io/projected/7f907ce9-693b-49d7-9505-295b23e4e208-kube-api-access-kwcwd\") pod \"collect-profiles-29320920-nf2jw\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.341935 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f907ce9-693b-49d7-9505-295b23e4e208-secret-volume\") pod \"collect-profiles-29320920-nf2jw\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.444965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwcwd\" (UniqueName: \"kubernetes.io/projected/7f907ce9-693b-49d7-9505-295b23e4e208-kube-api-access-kwcwd\") pod \"collect-profiles-29320920-nf2jw\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.445022 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f907ce9-693b-49d7-9505-295b23e4e208-secret-volume\") pod \"collect-profiles-29320920-nf2jw\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.445168 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f907ce9-693b-49d7-9505-295b23e4e208-config-volume\") pod \"collect-profiles-29320920-nf2jw\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.448087 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/7f907ce9-693b-49d7-9505-295b23e4e208-config-volume\") pod \"collect-profiles-29320920-nf2jw\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.458744 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f907ce9-693b-49d7-9505-295b23e4e208-secret-volume\") pod \"collect-profiles-29320920-nf2jw\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.468491 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwcwd\" (UniqueName: \"kubernetes.io/projected/7f907ce9-693b-49d7-9505-295b23e4e208-kube-api-access-kwcwd\") pod \"collect-profiles-29320920-nf2jw\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.483074 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:00 crc kubenswrapper[4688]: I0930 18:00:00.921074 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw"] Sep 30 18:00:01 crc kubenswrapper[4688]: I0930 18:00:01.783091 4688 generic.go:334] "Generic (PLEG): container finished" podID="faf29e19-4d73-430a-937d-2b2b02ef6d70" containerID="16cc7963e83e98c1b50806ec92ee0559703f85e4418b68652dc3bc71c0c5990c" exitCode=0 Sep 30 18:00:01 crc kubenswrapper[4688]: I0930 18:00:01.783190 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" event={"ID":"faf29e19-4d73-430a-937d-2b2b02ef6d70","Type":"ContainerDied","Data":"16cc7963e83e98c1b50806ec92ee0559703f85e4418b68652dc3bc71c0c5990c"} Sep 30 18:00:01 crc kubenswrapper[4688]: I0930 18:00:01.787122 4688 generic.go:334] "Generic (PLEG): container finished" podID="7f907ce9-693b-49d7-9505-295b23e4e208" containerID="89db9c80730a15b937dce7c0d68d1ea437cfc0124c7e97d7bf83f2df7ff2838f" exitCode=0 Sep 30 18:00:01 crc kubenswrapper[4688]: I0930 18:00:01.787170 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" event={"ID":"7f907ce9-693b-49d7-9505-295b23e4e208","Type":"ContainerDied","Data":"89db9c80730a15b937dce7c0d68d1ea437cfc0124c7e97d7bf83f2df7ff2838f"} Sep 30 18:00:01 crc kubenswrapper[4688]: I0930 18:00:01.787198 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" event={"ID":"7f907ce9-693b-49d7-9505-295b23e4e208","Type":"ContainerStarted","Data":"c86607e07b4b5c3fe302ff1a1474a8bbba22997aa2fff1726aee0379b74dae13"} Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.122386 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.226714 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.294846 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwcwd\" (UniqueName: \"kubernetes.io/projected/7f907ce9-693b-49d7-9505-295b23e4e208-kube-api-access-kwcwd\") pod \"7f907ce9-693b-49d7-9505-295b23e4e208\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.295773 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-libvirt-combined-ca-bundle\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.295842 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.295862 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ovn-combined-ca-bundle\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.295890 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.295911 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f907ce9-693b-49d7-9505-295b23e4e208-config-volume\") pod \"7f907ce9-693b-49d7-9505-295b23e4e208\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.295940 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-bootstrap-combined-ca-bundle\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.295955 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-neutron-metadata-combined-ca-bundle\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.295977 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-ovn-default-certs-0\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: 
\"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.296009 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krzff\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-kube-api-access-krzff\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.296030 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f907ce9-693b-49d7-9505-295b23e4e208-secret-volume\") pod \"7f907ce9-693b-49d7-9505-295b23e4e208\" (UID: \"7f907ce9-693b-49d7-9505-295b23e4e208\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.296543 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f907ce9-693b-49d7-9505-295b23e4e208-config-volume" (OuterVolumeSpecName: "config-volume") pod "7f907ce9-693b-49d7-9505-295b23e4e208" (UID: "7f907ce9-693b-49d7-9505-295b23e4e208"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.297115 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-telemetry-combined-ca-bundle\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.297178 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ssh-key\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.297203 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-inventory\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.297224 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-nova-combined-ca-bundle\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.297248 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.297280 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-repo-setup-combined-ca-bundle\") pod \"faf29e19-4d73-430a-937d-2b2b02ef6d70\" (UID: \"faf29e19-4d73-430a-937d-2b2b02ef6d70\") " Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.302871 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.303274 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f907ce9-693b-49d7-9505-295b23e4e208-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.303304 4688 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.303410 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f907ce9-693b-49d7-9505-295b23e4e208-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7f907ce9-693b-49d7-9505-295b23e4e208" (UID: "7f907ce9-693b-49d7-9505-295b23e4e208"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.304488 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.305463 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.305479 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.305644 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.305707 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.305743 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.306464 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.307276 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-kube-api-access-krzff" (OuterVolumeSpecName: "kube-api-access-krzff") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "kube-api-access-krzff". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.322992 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.323747 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.331424 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.333620 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f907ce9-693b-49d7-9505-295b23e4e208-kube-api-access-kwcwd" (OuterVolumeSpecName: "kube-api-access-kwcwd") pod "7f907ce9-693b-49d7-9505-295b23e4e208" (UID: "7f907ce9-693b-49d7-9505-295b23e4e208"). InnerVolumeSpecName "kube-api-access-kwcwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.343675 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-inventory" (OuterVolumeSpecName: "inventory") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.344688 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "faf29e19-4d73-430a-937d-2b2b02ef6d70" (UID: "faf29e19-4d73-430a-937d-2b2b02ef6d70"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.404961 4688 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405242 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwcwd\" (UniqueName: \"kubernetes.io/projected/7f907ce9-693b-49d7-9505-295b23e4e208-kube-api-access-kwcwd\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405252 4688 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405263 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405273 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405283 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405292 4688 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405302 4688 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405314 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krzff\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-kube-api-access-krzff\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405323 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f907ce9-693b-49d7-9505-295b23e4e208-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405331 4688 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405339 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405346 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405354 4688 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf29e19-4d73-430a-937d-2b2b02ef6d70-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.405362 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/faf29e19-4d73-430a-937d-2b2b02ef6d70-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.810332 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" event={"ID":"faf29e19-4d73-430a-937d-2b2b02ef6d70","Type":"ContainerDied","Data":"681c722b7f3f8a25613a9acaf393ce8ff2ff67aa82088b21b921b136e2f9c7e8"} Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.810415 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="681c722b7f3f8a25613a9acaf393ce8ff2ff67aa82088b21b921b136e2f9c7e8" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.810359 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d7wch" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.815315 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" event={"ID":"7f907ce9-693b-49d7-9505-295b23e4e208","Type":"ContainerDied","Data":"c86607e07b4b5c3fe302ff1a1474a8bbba22997aa2fff1726aee0379b74dae13"} Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.815358 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c86607e07b4b5c3fe302ff1a1474a8bbba22997aa2fff1726aee0379b74dae13" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.815413 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.891627 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26"] Sep 30 18:00:03 crc kubenswrapper[4688]: E0930 18:00:03.892298 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f907ce9-693b-49d7-9505-295b23e4e208" containerName="collect-profiles" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.892434 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f907ce9-693b-49d7-9505-295b23e4e208" containerName="collect-profiles" Sep 30 18:00:03 crc kubenswrapper[4688]: E0930 18:00:03.892539 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf29e19-4d73-430a-937d-2b2b02ef6d70" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.892659 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf29e19-4d73-430a-937d-2b2b02ef6d70" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.893021 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f907ce9-693b-49d7-9505-295b23e4e208" containerName="collect-profiles" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.893143 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf29e19-4d73-430a-937d-2b2b02ef6d70" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.893951 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.897234 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.897556 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.897805 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.897961 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.898358 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.905247 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26"] Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.920136 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nv2x\" (UniqueName: \"kubernetes.io/projected/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-kube-api-access-5nv2x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.920243 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.920424 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.920472 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:03 crc kubenswrapper[4688]: I0930 18:00:03.920723 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.022831 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.022986 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nv2x\" (UniqueName: \"kubernetes.io/projected/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-kube-api-access-5nv2x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.023024 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.023076 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.023098 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.023930 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.028513 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.029190 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.029340 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.046817 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nv2x\" (UniqueName: \"kubernetes.io/projected/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-kube-api-access-5nv2x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjw26\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.195983 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd"] Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.208971 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320875-n5ljd"] Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.220245 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.732914 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26"] Sep 30 18:00:04 crc kubenswrapper[4688]: W0930 18:00:04.736562 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01be6ded_891d_4d74_9af3_2adcc0d9ebb4.slice/crio-d9f81f32a565f593336c28662e5f88a8e24e66b1cb8dedc497e5f1b6ec284ed6 WatchSource:0}: Error finding container d9f81f32a565f593336c28662e5f88a8e24e66b1cb8dedc497e5f1b6ec284ed6: Status 404 returned error can't find the container with id d9f81f32a565f593336c28662e5f88a8e24e66b1cb8dedc497e5f1b6ec284ed6 Sep 30 18:00:04 crc kubenswrapper[4688]: I0930 18:00:04.823387 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" event={"ID":"01be6ded-891d-4d74-9af3-2adcc0d9ebb4","Type":"ContainerStarted","Data":"d9f81f32a565f593336c28662e5f88a8e24e66b1cb8dedc497e5f1b6ec284ed6"} Sep 30 18:00:05 crc kubenswrapper[4688]: I0930 18:00:05.447556 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1537ab-4a36-4cae-aa5c-5035d1705936" path="/var/lib/kubelet/pods/bf1537ab-4a36-4cae-aa5c-5035d1705936/volumes" Sep 30 18:00:06 crc kubenswrapper[4688]: I0930 18:00:06.839776 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" event={"ID":"01be6ded-891d-4d74-9af3-2adcc0d9ebb4","Type":"ContainerStarted","Data":"2433699038bc71fb4a667a759b98dc3aa90a0bdbfc8ff030e7aa79ade929126d"} Sep 30 18:00:06 crc kubenswrapper[4688]: I0930 18:00:06.862148 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" podStartSLOduration=2.761999173 podStartE2EDuration="3.862114535s" podCreationTimestamp="2025-09-30 18:00:03 +0000 UTC" firstStartedPulling="2025-09-30 18:00:04.739376108 +0000 UTC m=+2662.034813666" lastFinishedPulling="2025-09-30 18:00:05.83949147 +0000 UTC m=+2663.134929028" observedRunningTime="2025-09-30 18:00:06.8565339 +0000 UTC m=+2664.151971458" watchObservedRunningTime="2025-09-30 18:00:06.862114535 +0000 UTC m=+2664.157552143" Sep 30 18:00:22 crc kubenswrapper[4688]: I0930 18:00:22.545649 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:00:22 crc kubenswrapper[4688]: I0930 18:00:22.546247 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:00:48 crc kubenswrapper[4688]: I0930 18:00:48.920063 4688 scope.go:117] "RemoveContainer" containerID="3a18192b3b71751799f0e9a888503a11e9f281465a62e0a91e5dfaa536179af6" Sep 30 18:00:52 crc kubenswrapper[4688]: I0930 18:00:52.545742 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:00:52 crc kubenswrapper[4688]: I0930 18:00:52.548227 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:00:52 crc kubenswrapper[4688]: I0930 18:00:52.548453 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 18:00:52 crc kubenswrapper[4688]: I0930 18:00:52.549764 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8783ed068e5c3d367e6e14871549f65c0a836962ecd2c9a8d140decb78493577"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 18:00:52 crc kubenswrapper[4688]: I0930 18:00:52.550051 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://8783ed068e5c3d367e6e14871549f65c0a836962ecd2c9a8d140decb78493577" gracePeriod=600 Sep 30 18:00:53 crc kubenswrapper[4688]: I0930 18:00:53.273562 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="8783ed068e5c3d367e6e14871549f65c0a836962ecd2c9a8d140decb78493577" exitCode=0 Sep 30 18:00:53 crc kubenswrapper[4688]: I0930 18:00:53.273607 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"8783ed068e5c3d367e6e14871549f65c0a836962ecd2c9a8d140decb78493577"} Sep 30 18:00:53 crc kubenswrapper[4688]: I0930 18:00:53.274113 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1"} Sep 30 18:00:53 crc kubenswrapper[4688]: I0930 18:00:53.274136 4688 scope.go:117] 
"RemoveContainer" containerID="1875aed0305fba2dfe10ad936a5cd97484bf2b2276aff95f2b09fda542de4c01" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.181357 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29320921-spfn9"] Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.183263 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.201068 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29320921-spfn9"] Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.254730 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-fernet-keys\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.254808 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-config-data\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.254842 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-combined-ca-bundle\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.255046 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjcmx\" (UniqueName: \"kubernetes.io/projected/38f66d5e-b38e-480b-8537-befc7512cc9e-kube-api-access-wjcmx\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.356949 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-combined-ca-bundle\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.357098 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjcmx\" (UniqueName: \"kubernetes.io/projected/38f66d5e-b38e-480b-8537-befc7512cc9e-kube-api-access-wjcmx\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.357248 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-fernet-keys\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.357280 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-config-data\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.367631 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-fernet-keys\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.367659 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-combined-ca-bundle\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.367702 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-config-data\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.375951 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjcmx\" (UniqueName: \"kubernetes.io/projected/38f66d5e-b38e-480b-8537-befc7512cc9e-kube-api-access-wjcmx\") pod \"keystone-cron-29320921-spfn9\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.505994 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:00 crc kubenswrapper[4688]: I0930 18:01:00.946994 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29320921-spfn9"] Sep 30 18:01:01 crc kubenswrapper[4688]: I0930 18:01:01.348094 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29320921-spfn9" event={"ID":"38f66d5e-b38e-480b-8537-befc7512cc9e","Type":"ContainerStarted","Data":"8d9cc1dbea1e652ef92ed06f14f19f150a4c399414b4abc2f2ba5d391eba4845"} Sep 30 18:01:01 crc kubenswrapper[4688]: I0930 18:01:01.349132 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29320921-spfn9" event={"ID":"38f66d5e-b38e-480b-8537-befc7512cc9e","Type":"ContainerStarted","Data":"41f40b8ee4eaccdec74412c5f340c0ee1e0a0a3f46cfb1195790dc4947e0c53a"} Sep 30 18:01:01 crc kubenswrapper[4688]: I0930 18:01:01.376056 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29320921-spfn9" podStartSLOduration=1.376031441 podStartE2EDuration="1.376031441s" podCreationTimestamp="2025-09-30 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 18:01:01.364083152 +0000 UTC m=+2718.659520730" watchObservedRunningTime="2025-09-30 18:01:01.376031441 +0000 UTC m=+2718.671469029" Sep 30 18:01:03 crc kubenswrapper[4688]: I0930 18:01:03.367288 4688 generic.go:334] "Generic (PLEG): container finished" podID="38f66d5e-b38e-480b-8537-befc7512cc9e" containerID="8d9cc1dbea1e652ef92ed06f14f19f150a4c399414b4abc2f2ba5d391eba4845" exitCode=0 Sep 30 18:01:03 crc kubenswrapper[4688]: I0930 18:01:03.367375 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29320921-spfn9" event={"ID":"38f66d5e-b38e-480b-8537-befc7512cc9e","Type":"ContainerDied","Data":"8d9cc1dbea1e652ef92ed06f14f19f150a4c399414b4abc2f2ba5d391eba4845"} Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.719698 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.750556 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-config-data\") pod \"38f66d5e-b38e-480b-8537-befc7512cc9e\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.750648 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-fernet-keys\") pod \"38f66d5e-b38e-480b-8537-befc7512cc9e\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.750685 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjcmx\" (UniqueName: \"kubernetes.io/projected/38f66d5e-b38e-480b-8537-befc7512cc9e-kube-api-access-wjcmx\") pod \"38f66d5e-b38e-480b-8537-befc7512cc9e\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.750766 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-combined-ca-bundle\") pod \"38f66d5e-b38e-480b-8537-befc7512cc9e\" (UID: \"38f66d5e-b38e-480b-8537-befc7512cc9e\") " Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.757954 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "38f66d5e-b38e-480b-8537-befc7512cc9e" (UID: "38f66d5e-b38e-480b-8537-befc7512cc9e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.760708 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f66d5e-b38e-480b-8537-befc7512cc9e-kube-api-access-wjcmx" (OuterVolumeSpecName: "kube-api-access-wjcmx") pod "38f66d5e-b38e-480b-8537-befc7512cc9e" (UID: "38f66d5e-b38e-480b-8537-befc7512cc9e"). InnerVolumeSpecName "kube-api-access-wjcmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.782280 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38f66d5e-b38e-480b-8537-befc7512cc9e" (UID: "38f66d5e-b38e-480b-8537-befc7512cc9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.823972 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-config-data" (OuterVolumeSpecName: "config-data") pod "38f66d5e-b38e-480b-8537-befc7512cc9e" (UID: "38f66d5e-b38e-480b-8537-befc7512cc9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.852687 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjcmx\" (UniqueName: \"kubernetes.io/projected/38f66d5e-b38e-480b-8537-befc7512cc9e-kube-api-access-wjcmx\") on node \"crc\" DevicePath \"\"" Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.852795 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.852811 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 18:01:04 crc kubenswrapper[4688]: I0930 18:01:04.852823 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38f66d5e-b38e-480b-8537-befc7512cc9e-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 30 18:01:05 crc kubenswrapper[4688]: I0930 18:01:05.397202 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29320921-spfn9" event={"ID":"38f66d5e-b38e-480b-8537-befc7512cc9e","Type":"ContainerDied","Data":"41f40b8ee4eaccdec74412c5f340c0ee1e0a0a3f46cfb1195790dc4947e0c53a"} Sep 30 18:01:05 crc kubenswrapper[4688]: I0930 18:01:05.397254 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f40b8ee4eaccdec74412c5f340c0ee1e0a0a3f46cfb1195790dc4947e0c53a" Sep 30 18:01:05 crc kubenswrapper[4688]: I0930 18:01:05.397268 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29320921-spfn9" Sep 30 18:01:14 crc kubenswrapper[4688]: I0930 18:01:14.483989 4688 generic.go:334] "Generic (PLEG): container finished" podID="01be6ded-891d-4d74-9af3-2adcc0d9ebb4" containerID="2433699038bc71fb4a667a759b98dc3aa90a0bdbfc8ff030e7aa79ade929126d" exitCode=0 Sep 30 18:01:14 crc kubenswrapper[4688]: I0930 18:01:14.484378 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" event={"ID":"01be6ded-891d-4d74-9af3-2adcc0d9ebb4","Type":"ContainerDied","Data":"2433699038bc71fb4a667a759b98dc3aa90a0bdbfc8ff030e7aa79ade929126d"} Sep 30 18:01:15 crc kubenswrapper[4688]: I0930 18:01:15.917901 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:01:15 crc kubenswrapper[4688]: I0930 18:01:15.971534 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovn-combined-ca-bundle\") pod \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " Sep 30 18:01:15 crc kubenswrapper[4688]: I0930 18:01:15.971692 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovncontroller-config-0\") pod \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " Sep 30 18:01:15 crc kubenswrapper[4688]: I0930 18:01:15.971830 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ssh-key\") pod \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " Sep 30 18:01:15 crc kubenswrapper[4688]: I0930 18:01:15.971878 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-inventory\") pod \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " Sep 30 18:01:15 crc kubenswrapper[4688]: I0930 18:01:15.972003 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nv2x\" (UniqueName: \"kubernetes.io/projected/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-kube-api-access-5nv2x\") pod \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\" (UID: \"01be6ded-891d-4d74-9af3-2adcc0d9ebb4\") " Sep 30 18:01:15 crc kubenswrapper[4688]: I0930 18:01:15.977743 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "01be6ded-891d-4d74-9af3-2adcc0d9ebb4" (UID: "01be6ded-891d-4d74-9af3-2adcc0d9ebb4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:01:15 crc kubenswrapper[4688]: I0930 18:01:15.977752 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-kube-api-access-5nv2x" (OuterVolumeSpecName: "kube-api-access-5nv2x") pod "01be6ded-891d-4d74-9af3-2adcc0d9ebb4" (UID: "01be6ded-891d-4d74-9af3-2adcc0d9ebb4"). InnerVolumeSpecName "kube-api-access-5nv2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:01:15 crc kubenswrapper[4688]: I0930 18:01:15.995335 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "01be6ded-891d-4d74-9af3-2adcc0d9ebb4" (UID: "01be6ded-891d-4d74-9af3-2adcc0d9ebb4"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.001037 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-inventory" (OuterVolumeSpecName: "inventory") pod "01be6ded-891d-4d74-9af3-2adcc0d9ebb4" (UID: "01be6ded-891d-4d74-9af3-2adcc0d9ebb4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.016770 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "01be6ded-891d-4d74-9af3-2adcc0d9ebb4" (UID: "01be6ded-891d-4d74-9af3-2adcc0d9ebb4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.075212 4688 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.075283 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.075310 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.075334 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nv2x\" (UniqueName: \"kubernetes.io/projected/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-kube-api-access-5nv2x\") on node \"crc\" DevicePath \"\"" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.075359 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01be6ded-891d-4d74-9af3-2adcc0d9ebb4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.510158 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" event={"ID":"01be6ded-891d-4d74-9af3-2adcc0d9ebb4","Type":"ContainerDied","Data":"d9f81f32a565f593336c28662e5f88a8e24e66b1cb8dedc497e5f1b6ec284ed6"} Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.510216 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f81f32a565f593336c28662e5f88a8e24e66b1cb8dedc497e5f1b6ec284ed6" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.510245 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjw26" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.611734 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"] Sep 30 18:01:16 crc kubenswrapper[4688]: E0930 18:01:16.612145 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01be6ded-891d-4d74-9af3-2adcc0d9ebb4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.612160 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="01be6ded-891d-4d74-9af3-2adcc0d9ebb4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Sep 30 18:01:16 crc kubenswrapper[4688]: E0930 18:01:16.612204 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f66d5e-b38e-480b-8537-befc7512cc9e" containerName="keystone-cron" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.612210 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f66d5e-b38e-480b-8537-befc7512cc9e" containerName="keystone-cron" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.612372 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="01be6ded-891d-4d74-9af3-2adcc0d9ebb4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.612389 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f66d5e-b38e-480b-8537-befc7512cc9e" containerName="keystone-cron" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.613046 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.617264 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.617520 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.617783 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.618010 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.618213 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.618391 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.624485 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"] Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.686734 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd" 
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.686816 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.686869 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.686924 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.686995 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.687219 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcnmq\" (UniqueName: \"kubernetes.io/projected/a36a505a-e276-4557-b4a5-dab7af0803fd-kube-api-access-vcnmq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.789003 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcnmq\" (UniqueName: \"kubernetes.io/projected/a36a505a-e276-4557-b4a5-dab7af0803fd-kube-api-access-vcnmq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.789121 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.789141 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.789173 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.789211 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.789229 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.795982 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.796021 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.796715 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.797475 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.798669 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.812534 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcnmq\" (UniqueName: \"kubernetes.io/projected/a36a505a-e276-4557-b4a5-dab7af0803fd-kube-api-access-vcnmq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:16 crc kubenswrapper[4688]: I0930 18:01:16.939030 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"
Sep 30 18:01:17 crc kubenswrapper[4688]: I0930 18:01:17.445942 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd"]
Sep 30 18:01:17 crc kubenswrapper[4688]: I0930 18:01:17.519705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd" event={"ID":"a36a505a-e276-4557-b4a5-dab7af0803fd","Type":"ContainerStarted","Data":"93ab8ced397274ace6ad3d0c90ae6355cb2629082a02d8b0a37c6af35ff231ba"}
Sep 30 18:01:18 crc kubenswrapper[4688]: I0930 18:01:18.529225 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd" event={"ID":"a36a505a-e276-4557-b4a5-dab7af0803fd","Type":"ContainerStarted","Data":"d628aa2c7143ba57972c732a09a8675e8d9e1b42fd31e6730ebe03f232e9b63f"}
Sep 30 18:01:18 crc kubenswrapper[4688]: I0930 18:01:18.553522 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd" podStartSLOduration=1.882971515 podStartE2EDuration="2.553506527s" podCreationTimestamp="2025-09-30 18:01:16 +0000 UTC" firstStartedPulling="2025-09-30 18:01:17.460347025 +0000 UTC m=+2734.755784593" lastFinishedPulling="2025-09-30 18:01:18.130882037 +0000 UTC m=+2735.426319605" observedRunningTime="2025-09-30 18:01:18.552488821 +0000 UTC m=+2735.847926389" watchObservedRunningTime="2025-09-30 18:01:18.553506527 +0000 UTC m=+2735.848944085"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.349487 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8jd"]
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.352324 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.359882 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8jd"]
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.501300 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-catalog-content\") pod \"redhat-marketplace-7d8jd\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") " pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.501430 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-utilities\") pod \"redhat-marketplace-7d8jd\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") " pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.501494 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppkhv\" (UniqueName: \"kubernetes.io/projected/4767b365-6797-4895-a958-32e74e621355-kube-api-access-ppkhv\") pod \"redhat-marketplace-7d8jd\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") " pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.602985 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-catalog-content\") pod \"redhat-marketplace-7d8jd\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") " pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.603091 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-utilities\") pod \"redhat-marketplace-7d8jd\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") " pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.603125 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppkhv\" (UniqueName: \"kubernetes.io/projected/4767b365-6797-4895-a958-32e74e621355-kube-api-access-ppkhv\") pod \"redhat-marketplace-7d8jd\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") " pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.604828 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-catalog-content\") pod \"redhat-marketplace-7d8jd\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") " pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.605109 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-utilities\") pod \"redhat-marketplace-7d8jd\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") " pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.630758 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppkhv\" (UniqueName: \"kubernetes.io/projected/4767b365-6797-4895-a958-32e74e621355-kube-api-access-ppkhv\") pod \"redhat-marketplace-7d8jd\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") " pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:58 crc kubenswrapper[4688]: I0930 18:01:58.677321 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:01:59 crc kubenswrapper[4688]: I0930 18:01:59.178649 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8jd"]
Sep 30 18:01:59 crc kubenswrapper[4688]: I0930 18:01:59.911824 4688 generic.go:334] "Generic (PLEG): container finished" podID="4767b365-6797-4895-a958-32e74e621355" containerID="429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb" exitCode=0
Sep 30 18:01:59 crc kubenswrapper[4688]: I0930 18:01:59.911933 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8jd" event={"ID":"4767b365-6797-4895-a958-32e74e621355","Type":"ContainerDied","Data":"429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb"}
Sep 30 18:01:59 crc kubenswrapper[4688]: I0930 18:01:59.912230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8jd" event={"ID":"4767b365-6797-4895-a958-32e74e621355","Type":"ContainerStarted","Data":"7682c6710765a79e02f7f5803ac0c8b0f1912e55221b4e00f54445de0b758674"}
Sep 30 18:01:59 crc kubenswrapper[4688]: I0930 18:01:59.914865 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 30 18:02:00 crc kubenswrapper[4688]: I0930 18:02:00.924432 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8jd" event={"ID":"4767b365-6797-4895-a958-32e74e621355","Type":"ContainerStarted","Data":"d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae"}
Sep 30 18:02:01 crc kubenswrapper[4688]: I0930 18:02:01.940595 4688 generic.go:334] "Generic (PLEG): container finished" podID="4767b365-6797-4895-a958-32e74e621355" containerID="d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae" exitCode=0
Sep 30 18:02:01 crc kubenswrapper[4688]: I0930 18:02:01.940640 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8jd" event={"ID":"4767b365-6797-4895-a958-32e74e621355","Type":"ContainerDied","Data":"d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae"}
Sep 30 18:02:02 crc kubenswrapper[4688]: I0930 18:02:02.950945 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8jd" event={"ID":"4767b365-6797-4895-a958-32e74e621355","Type":"ContainerStarted","Data":"06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c"}
Sep 30 18:02:02 crc kubenswrapper[4688]: I0930 18:02:02.974025 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7d8jd" podStartSLOduration=2.331064704 podStartE2EDuration="4.973975478s" podCreationTimestamp="2025-09-30 18:01:58 +0000 UTC" firstStartedPulling="2025-09-30 18:01:59.914198633 +0000 UTC m=+2777.209636201" lastFinishedPulling="2025-09-30 18:02:02.557109417 +0000 UTC m=+2779.852546975" observedRunningTime="2025-09-30 18:02:02.973640289 +0000 UTC m=+2780.269077847" watchObservedRunningTime="2025-09-30 18:02:02.973975478 +0000 UTC m=+2780.269413036"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.736104 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hszf2"]
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.741092 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.750310 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hszf2"]
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.858035 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-965tb\" (UniqueName: \"kubernetes.io/projected/b50354de-aa5a-469b-8e20-3a62f12f4efb-kube-api-access-965tb\") pod \"certified-operators-hszf2\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.858212 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-utilities\") pod \"certified-operators-hszf2\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.858264 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-catalog-content\") pod \"certified-operators-hszf2\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.959758 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-965tb\" (UniqueName: \"kubernetes.io/projected/b50354de-aa5a-469b-8e20-3a62f12f4efb-kube-api-access-965tb\") pod \"certified-operators-hszf2\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.960166 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-utilities\") pod \"certified-operators-hszf2\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.960329 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-catalog-content\") pod \"certified-operators-hszf2\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.960627 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-utilities\") pod \"certified-operators-hszf2\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.960657 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-catalog-content\") pod \"certified-operators-hszf2\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:04 crc kubenswrapper[4688]: I0930 18:02:04.981203 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-965tb\" (UniqueName: \"kubernetes.io/projected/b50354de-aa5a-469b-8e20-3a62f12f4efb-kube-api-access-965tb\") pod \"certified-operators-hszf2\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:05 crc kubenswrapper[4688]: I0930 18:02:05.173046 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hszf2"
Sep 30 18:02:05 crc kubenswrapper[4688]: I0930 18:02:05.665418 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hszf2"]
Sep 30 18:02:05 crc kubenswrapper[4688]: W0930 18:02:05.666468 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb50354de_aa5a_469b_8e20_3a62f12f4efb.slice/crio-cda4ee93522bfef76fdb7a8dcfe0c876a0367b0b0cc6c850ed38a43d97e74b54 WatchSource:0}: Error finding container cda4ee93522bfef76fdb7a8dcfe0c876a0367b0b0cc6c850ed38a43d97e74b54: Status 404 returned error can't find the container with id cda4ee93522bfef76fdb7a8dcfe0c876a0367b0b0cc6c850ed38a43d97e74b54
Sep 30 18:02:05 crc kubenswrapper[4688]: I0930 18:02:05.978550 4688 generic.go:334] "Generic (PLEG): container finished" podID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerID="31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1" exitCode=0
Sep 30 18:02:05 crc kubenswrapper[4688]: I0930 18:02:05.978614 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hszf2" event={"ID":"b50354de-aa5a-469b-8e20-3a62f12f4efb","Type":"ContainerDied","Data":"31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1"}
Sep 30 18:02:05 crc kubenswrapper[4688]: I0930 18:02:05.978645 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hszf2" event={"ID":"b50354de-aa5a-469b-8e20-3a62f12f4efb","Type":"ContainerStarted","Data":"cda4ee93522bfef76fdb7a8dcfe0c876a0367b0b0cc6c850ed38a43d97e74b54"}
Sep 30 18:02:06 crc kubenswrapper[4688]: I0930 18:02:06.988793 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hszf2" event={"ID":"b50354de-aa5a-469b-8e20-3a62f12f4efb","Type":"ContainerStarted","Data":"42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7"}
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.142984 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ztw6f"]
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.144919 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.165366 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ztw6f"]
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.200474 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjhm\" (UniqueName: \"kubernetes.io/projected/bfbf39bf-45fd-42e1-a754-bd5193f198e9-kube-api-access-wfjhm\") pod \"community-operators-ztw6f\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.200564 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-utilities\") pod \"community-operators-ztw6f\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.200706 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-catalog-content\") pod \"community-operators-ztw6f\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.302207 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-utilities\") pod \"community-operators-ztw6f\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.302364 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-catalog-content\") pod \"community-operators-ztw6f\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.302528 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjhm\" (UniqueName: \"kubernetes.io/projected/bfbf39bf-45fd-42e1-a754-bd5193f198e9-kube-api-access-wfjhm\") pod \"community-operators-ztw6f\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.302828 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-utilities\") pod \"community-operators-ztw6f\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.302855 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-catalog-content\") pod \"community-operators-ztw6f\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.333493 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjhm\" (UniqueName: \"kubernetes.io/projected/bfbf39bf-45fd-42e1-a754-bd5193f198e9-kube-api-access-wfjhm\") pod \"community-operators-ztw6f\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.488373 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ztw6f"
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.997449 4688 generic.go:334] "Generic (PLEG): container finished" podID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerID="42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7" exitCode=0
Sep 30 18:02:07 crc kubenswrapper[4688]: I0930 18:02:07.997493 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hszf2" event={"ID":"b50354de-aa5a-469b-8e20-3a62f12f4efb","Type":"ContainerDied","Data":"42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7"}
Sep 30 18:02:08 crc kubenswrapper[4688]: I0930 18:02:08.114101 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ztw6f"]
Sep 30 18:02:08 crc kubenswrapper[4688]: W0930 18:02:08.118717 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfbf39bf_45fd_42e1_a754_bd5193f198e9.slice/crio-86e761b2636e2a6786dc0fee946fccce3946ea344f585b44de7f39454319bd33 WatchSource:0}: Error finding container 86e761b2636e2a6786dc0fee946fccce3946ea344f585b44de7f39454319bd33: Status 404 returned error can't find the container with id 86e761b2636e2a6786dc0fee946fccce3946ea344f585b44de7f39454319bd33
Sep 30 18:02:08 crc kubenswrapper[4688]: I0930 18:02:08.679587 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:02:08 crc kubenswrapper[4688]: I0930 18:02:08.679988 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:02:08 crc kubenswrapper[4688]: I0930 18:02:08.746314 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:02:09 crc kubenswrapper[4688]: I0930 18:02:09.008288 4688 generic.go:334] "Generic (PLEG): container finished" podID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerID="1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd" exitCode=0
Sep 30 18:02:09 crc kubenswrapper[4688]: I0930 18:02:09.008384 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztw6f" event={"ID":"bfbf39bf-45fd-42e1-a754-bd5193f198e9","Type":"ContainerDied","Data":"1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd"}
Sep 30 18:02:09 crc kubenswrapper[4688]: I0930 18:02:09.008416 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztw6f" event={"ID":"bfbf39bf-45fd-42e1-a754-bd5193f198e9","Type":"ContainerStarted","Data":"86e761b2636e2a6786dc0fee946fccce3946ea344f585b44de7f39454319bd33"}
Sep 30 18:02:09 crc kubenswrapper[4688]: I0930 18:02:09.013728 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hszf2" event={"ID":"b50354de-aa5a-469b-8e20-3a62f12f4efb","Type":"ContainerStarted","Data":"9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f"}
Sep 30 18:02:09 crc kubenswrapper[4688]: I0930 18:02:09.048867 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hszf2" podStartSLOduration=2.5930860449999997 podStartE2EDuration="5.04884526s" podCreationTimestamp="2025-09-30 18:02:04 +0000 UTC" firstStartedPulling="2025-09-30 18:02:05.980138693 +0000 UTC m=+2783.275576251" lastFinishedPulling="2025-09-30 18:02:08.435897908 +0000 UTC m=+2785.731335466" observedRunningTime="2025-09-30 18:02:09.045496814 +0000 UTC m=+2786.340934382" watchObservedRunningTime="2025-09-30 18:02:09.04884526 +0000 UTC m=+2786.344282818"
Sep 30 18:02:09 crc kubenswrapper[4688]: I0930 18:02:09.080783 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:02:10 crc kubenswrapper[4688]: I0930 18:02:10.024354 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztw6f" event={"ID":"bfbf39bf-45fd-42e1-a754-bd5193f198e9","Type":"ContainerStarted","Data":"fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff"}
Sep 30 18:02:11 crc kubenswrapper[4688]: I0930 18:02:11.035153 4688 generic.go:334] "Generic (PLEG): container finished" podID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerID="fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff" exitCode=0
Sep 30 18:02:11 crc kubenswrapper[4688]: I0930 18:02:11.035195 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztw6f" event={"ID":"bfbf39bf-45fd-42e1-a754-bd5193f198e9","Type":"ContainerDied","Data":"fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff"}
Sep 30 18:02:11 crc kubenswrapper[4688]: I0930 18:02:11.518175 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8jd"]
Sep 30 18:02:11 crc kubenswrapper[4688]: I0930 18:02:11.518666 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7d8jd" podUID="4767b365-6797-4895-a958-32e74e621355" containerName="registry-server" containerID="cri-o://06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c" gracePeriod=2
Sep 30 18:02:11 crc kubenswrapper[4688]: I0930 18:02:11.981384 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.049171 4688 generic.go:334] "Generic (PLEG): container finished" podID="4767b365-6797-4895-a958-32e74e621355" containerID="06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c" exitCode=0
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.049517 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8jd" event={"ID":"4767b365-6797-4895-a958-32e74e621355","Type":"ContainerDied","Data":"06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c"}
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.049554 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7d8jd" event={"ID":"4767b365-6797-4895-a958-32e74e621355","Type":"ContainerDied","Data":"7682c6710765a79e02f7f5803ac0c8b0f1912e55221b4e00f54445de0b758674"}
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.049596 4688 scope.go:117] "RemoveContainer" containerID="06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c"
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.049796 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7d8jd"
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.085202 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-utilities\") pod \"4767b365-6797-4895-a958-32e74e621355\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") "
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.085312 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppkhv\" (UniqueName: \"kubernetes.io/projected/4767b365-6797-4895-a958-32e74e621355-kube-api-access-ppkhv\") pod \"4767b365-6797-4895-a958-32e74e621355\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") "
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.085593 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-catalog-content\") pod \"4767b365-6797-4895-a958-32e74e621355\" (UID: \"4767b365-6797-4895-a958-32e74e621355\") "
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.086959 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-utilities" (OuterVolumeSpecName: "utilities") pod "4767b365-6797-4895-a958-32e74e621355" (UID: "4767b365-6797-4895-a958-32e74e621355"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.094034 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4767b365-6797-4895-a958-32e74e621355-kube-api-access-ppkhv" (OuterVolumeSpecName: "kube-api-access-ppkhv") pod "4767b365-6797-4895-a958-32e74e621355" (UID: "4767b365-6797-4895-a958-32e74e621355"). InnerVolumeSpecName "kube-api-access-ppkhv".
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.140303 4688 scope.go:117] "RemoveContainer" containerID="d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.151660 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4767b365-6797-4895-a958-32e74e621355" (UID: "4767b365-6797-4895-a958-32e74e621355"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.189170 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.189205 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4767b365-6797-4895-a958-32e74e621355-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.189219 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppkhv\" (UniqueName: \"kubernetes.io/projected/4767b365-6797-4895-a958-32e74e621355-kube-api-access-ppkhv\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.191409 4688 scope.go:117] "RemoveContainer" containerID="429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.228921 4688 scope.go:117] "RemoveContainer" containerID="06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c" Sep 30 18:02:12 crc kubenswrapper[4688]: E0930 18:02:12.229338 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c\": container with ID starting with 06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c not found: ID does not exist" containerID="06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.229380 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c"} err="failed to get container status \"06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c\": rpc error: code = NotFound desc = could not find container \"06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c\": container with ID starting with 06dc098f74f0341c5eac75a9a96481f73e3c41eff972f8992f2e389844d3238c not found: ID does not exist" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.229405 4688 scope.go:117] "RemoveContainer" containerID="d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae" Sep 30 18:02:12 crc kubenswrapper[4688]: E0930 18:02:12.229722 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae\": container with ID starting with d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae not found: ID does not exist" containerID="d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae" Sep 
30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.229743 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae"} err="failed to get container status \"d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae\": rpc error: code = NotFound desc = could not find container \"d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae\": container with ID starting with d2b49faa2dea01a0d5fe6e150f694faf007cb1fb7e9cf0888551bb083177cfae not found: ID does not exist" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.229756 4688 scope.go:117] "RemoveContainer" containerID="429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb" Sep 30 18:02:12 crc kubenswrapper[4688]: E0930 18:02:12.233752 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb\": container with ID starting with 429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb not found: ID does not exist" containerID="429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.233792 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb"} err="failed to get container status \"429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb\": rpc error: code = NotFound desc = could not find container \"429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb\": container with ID starting with 429f9029c3c9913c39944807a0947c31fda47ab9de6640faa12d29b086dc82fb not found: ID does not exist" Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.389529 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8jd"] Sep 30 18:02:12 crc kubenswrapper[4688]: I0930 18:02:12.398235 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7d8jd"] Sep 30 18:02:13 crc kubenswrapper[4688]: I0930 18:02:13.060595 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztw6f" event={"ID":"bfbf39bf-45fd-42e1-a754-bd5193f198e9","Type":"ContainerStarted","Data":"744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a"} Sep 30 18:02:13 crc kubenswrapper[4688]: I0930 18:02:13.089404 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ztw6f" podStartSLOduration=3.261113191 podStartE2EDuration="6.089384583s" podCreationTimestamp="2025-09-30 18:02:07 +0000 UTC" firstStartedPulling="2025-09-30 18:02:09.009948812 +0000 UTC m=+2786.305386410" lastFinishedPulling="2025-09-30 18:02:11.838220244 +0000 UTC m=+2789.133657802" observedRunningTime="2025-09-30 18:02:13.085714818 +0000 UTC m=+2790.381152386" watchObservedRunningTime="2025-09-30 18:02:13.089384583 +0000 UTC m=+2790.384822141" Sep 30 18:02:13 crc kubenswrapper[4688]: I0930 18:02:13.441012 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4767b365-6797-4895-a958-32e74e621355" path="/var/lib/kubelet/pods/4767b365-6797-4895-a958-32e74e621355/volumes" Sep 30 18:02:15 crc kubenswrapper[4688]: I0930 18:02:15.174260 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-hszf2" Sep 30 18:02:15 crc kubenswrapper[4688]: I0930 18:02:15.174694 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hszf2" Sep 30 18:02:15 crc kubenswrapper[4688]: I0930 18:02:15.221400 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hszf2" Sep 30 18:02:16 crc kubenswrapper[4688]: I0930 18:02:16.086392 4688 generic.go:334] "Generic (PLEG): container finished" podID="a36a505a-e276-4557-b4a5-dab7af0803fd" containerID="d628aa2c7143ba57972c732a09a8675e8d9e1b42fd31e6730ebe03f232e9b63f" exitCode=0 Sep 30 18:02:16 crc kubenswrapper[4688]: I0930 18:02:16.087198 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd" event={"ID":"a36a505a-e276-4557-b4a5-dab7af0803fd","Type":"ContainerDied","Data":"d628aa2c7143ba57972c732a09a8675e8d9e1b42fd31e6730ebe03f232e9b63f"} Sep 30 18:02:16 crc kubenswrapper[4688]: I0930 18:02:16.139087 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hszf2" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.119900 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hszf2"] Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.489408 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ztw6f" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.489708 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ztw6f" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.541244 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ztw6f" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.569283 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.600215 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-ssh-key\") pod \"a36a505a-e276-4557-b4a5-dab7af0803fd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.600326 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-nova-metadata-neutron-config-0\") pod \"a36a505a-e276-4557-b4a5-dab7af0803fd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.600353 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"a36a505a-e276-4557-b4a5-dab7af0803fd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.600380 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcnmq\" (UniqueName: \"kubernetes.io/projected/a36a505a-e276-4557-b4a5-dab7af0803fd-kube-api-access-vcnmq\") pod \"a36a505a-e276-4557-b4a5-dab7af0803fd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.600397 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-inventory\") pod \"a36a505a-e276-4557-b4a5-dab7af0803fd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.600418 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-metadata-combined-ca-bundle\") pod \"a36a505a-e276-4557-b4a5-dab7af0803fd\" (UID: \"a36a505a-e276-4557-b4a5-dab7af0803fd\") " Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.607096 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36a505a-e276-4557-b4a5-dab7af0803fd-kube-api-access-vcnmq" (OuterVolumeSpecName: "kube-api-access-vcnmq") pod "a36a505a-e276-4557-b4a5-dab7af0803fd" (UID: "a36a505a-e276-4557-b4a5-dab7af0803fd"). InnerVolumeSpecName "kube-api-access-vcnmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.616959 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a36a505a-e276-4557-b4a5-dab7af0803fd" (UID: "a36a505a-e276-4557-b4a5-dab7af0803fd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.634473 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "a36a505a-e276-4557-b4a5-dab7af0803fd" (UID: "a36a505a-e276-4557-b4a5-dab7af0803fd"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.634559 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "a36a505a-e276-4557-b4a5-dab7af0803fd" (UID: "a36a505a-e276-4557-b4a5-dab7af0803fd"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.636403 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a36a505a-e276-4557-b4a5-dab7af0803fd" (UID: "a36a505a-e276-4557-b4a5-dab7af0803fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.645444 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-inventory" (OuterVolumeSpecName: "inventory") pod "a36a505a-e276-4557-b4a5-dab7af0803fd" (UID: "a36a505a-e276-4557-b4a5-dab7af0803fd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.701916 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.701954 4688 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.701966 4688 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.701977 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcnmq\" (UniqueName: \"kubernetes.io/projected/a36a505a-e276-4557-b4a5-dab7af0803fd-kube-api-access-vcnmq\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.701986 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:17 crc kubenswrapper[4688]: I0930 18:02:17.701994 4688 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a36a505a-e276-4557-b4a5-dab7af0803fd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.102735 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd" event={"ID":"a36a505a-e276-4557-b4a5-dab7af0803fd","Type":"ContainerDied","Data":"93ab8ced397274ace6ad3d0c90ae6355cb2629082a02d8b0a37c6af35ff231ba"} Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.102779 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.102797 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93ab8ced397274ace6ad3d0c90ae6355cb2629082a02d8b0a37c6af35ff231ba" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.103449 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hszf2" podUID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerName="registry-server" containerID="cri-o://9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f" gracePeriod=2 Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.167295 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ztw6f" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.231657 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn"] Sep 30 18:02:18 crc kubenswrapper[4688]: E0930 18:02:18.232034 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36a505a-e276-4557-b4a5-dab7af0803fd" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.232055 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36a505a-e276-4557-b4a5-dab7af0803fd" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Sep 30 18:02:18 crc kubenswrapper[4688]: E0930 18:02:18.232078 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4767b365-6797-4895-a958-32e74e621355" containerName="extract-utilities" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.232086 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4767b365-6797-4895-a958-32e74e621355" containerName="extract-utilities" Sep 30 18:02:18 crc kubenswrapper[4688]: E0930 18:02:18.232124 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4767b365-6797-4895-a958-32e74e621355" containerName="registry-server" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.232133 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4767b365-6797-4895-a958-32e74e621355" containerName="registry-server" Sep 30 18:02:18 crc kubenswrapper[4688]: E0930 18:02:18.232146 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4767b365-6797-4895-a958-32e74e621355" containerName="extract-content" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.232152 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4767b365-6797-4895-a958-32e74e621355" containerName="extract-content" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.232367 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36a505a-e276-4557-b4a5-dab7af0803fd" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.232386 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4767b365-6797-4895-a958-32e74e621355" containerName="registry-server" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.232995 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.235064 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.235094 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.235404 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.235543 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.235699 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.245870 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn"] Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.317170 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.317250 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.317299 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crvfr\" (UniqueName: \"kubernetes.io/projected/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-kube-api-access-crvfr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.317375 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.317425 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.419176 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.419261 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.419307 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crvfr\" (UniqueName: \"kubernetes.io/projected/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-kube-api-access-crvfr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.419354 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.419394 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.424277 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.424277 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.424424 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.424489 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.436542 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crvfr\" (UniqueName: \"kubernetes.io/projected/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-kube-api-access-crvfr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wghcn\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:18 crc kubenswrapper[4688]: I0930 18:02:18.577432 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.105954 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hszf2" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.113975 4688 generic.go:334] "Generic (PLEG): container finished" podID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerID="9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f" exitCode=0 Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.115157 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hszf2" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.115357 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hszf2" event={"ID":"b50354de-aa5a-469b-8e20-3a62f12f4efb","Type":"ContainerDied","Data":"9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f"} Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.115396 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hszf2" event={"ID":"b50354de-aa5a-469b-8e20-3a62f12f4efb","Type":"ContainerDied","Data":"cda4ee93522bfef76fdb7a8dcfe0c876a0367b0b0cc6c850ed38a43d97e74b54"} Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.115417 4688 scope.go:117] "RemoveContainer" containerID="9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.137207 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-utilities\") pod \"b50354de-aa5a-469b-8e20-3a62f12f4efb\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.137330 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-catalog-content\") pod \"b50354de-aa5a-469b-8e20-3a62f12f4efb\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.137388 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-965tb\" (UniqueName: \"kubernetes.io/projected/b50354de-aa5a-469b-8e20-3a62f12f4efb-kube-api-access-965tb\") pod \"b50354de-aa5a-469b-8e20-3a62f12f4efb\" (UID: \"b50354de-aa5a-469b-8e20-3a62f12f4efb\") " Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.139128 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-utilities" (OuterVolumeSpecName: "utilities") pod "b50354de-aa5a-469b-8e20-3a62f12f4efb" (UID: "b50354de-aa5a-469b-8e20-3a62f12f4efb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.143353 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn"] Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.147200 4688 scope.go:117] "RemoveContainer" containerID="42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.148800 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b50354de-aa5a-469b-8e20-3a62f12f4efb-kube-api-access-965tb" (OuterVolumeSpecName: "kube-api-access-965tb") pod "b50354de-aa5a-469b-8e20-3a62f12f4efb" (UID: "b50354de-aa5a-469b-8e20-3a62f12f4efb"). InnerVolumeSpecName "kube-api-access-965tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.193613 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b50354de-aa5a-469b-8e20-3a62f12f4efb" (UID: "b50354de-aa5a-469b-8e20-3a62f12f4efb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.207931 4688 scope.go:117] "RemoveContainer" containerID="31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.226232 4688 scope.go:117] "RemoveContainer" containerID="9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f" Sep 30 18:02:19 crc kubenswrapper[4688]: E0930 18:02:19.226794 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f\": container with ID starting with 9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f not found: ID does not exist" containerID="9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.226846 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f"} err="failed to get container status \"9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f\": rpc error: code = NotFound desc = could not find container \"9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f\": container with ID starting with 9afb69bdfb47527dc0f43b4037bf1d8f9730a5da1b080f5ef321e5dc97c79c1f not found: ID does not exist" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.226888 4688 scope.go:117] "RemoveContainer" containerID="42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7" Sep 30 18:02:19 crc kubenswrapper[4688]: E0930 18:02:19.227190 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7\": container with ID starting with 42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7 not found: ID does not exist" 
containerID="42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.227218 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7"} err="failed to get container status \"42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7\": rpc error: code = NotFound desc = could not find container \"42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7\": container with ID starting with 42eaf0368b2e13b84214e58d1661e4ff3fd46b907c9305c41631469c837139d7 not found: ID does not exist" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.227261 4688 scope.go:117] "RemoveContainer" containerID="31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1" Sep 30 18:02:19 crc kubenswrapper[4688]: E0930 18:02:19.227656 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1\": container with ID starting with 31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1 not found: ID does not exist" containerID="31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.227674 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1"} err="failed to get container status \"31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1\": rpc error: code = NotFound desc = could not find container \"31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1\": container with ID starting with 31f895f73ecbe6e95433d6ac46ff1f79b94b9ae178ced8cbdb024d8fd2b6bcd1 not found: ID does not exist" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.238990 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.239035 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-965tb\" (UniqueName: \"kubernetes.io/projected/b50354de-aa5a-469b-8e20-3a62f12f4efb-kube-api-access-965tb\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.239051 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b50354de-aa5a-469b-8e20-3a62f12f4efb-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.445457 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hszf2"] Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.454278 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hszf2"] Sep 30 18:02:19 crc kubenswrapper[4688]: I0930 18:02:19.924080 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ztw6f"] Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.124701 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" 
event={"ID":"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8","Type":"ContainerStarted","Data":"53d48476ec20606d88c8f8fbfa5571496c263ac6aa3c1b8b39e9a520f5e70c0e"} Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.124780 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" event={"ID":"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8","Type":"ContainerStarted","Data":"3abcbb5fca0b71fc665d1f5fd1a10c5a10b1a766347a4be7de560a8793c9a737"} Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.124808 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ztw6f" podUID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerName="registry-server" containerID="cri-o://744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a" gracePeriod=2 Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.147179 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" podStartSLOduration=1.649416907 podStartE2EDuration="2.147156384s" podCreationTimestamp="2025-09-30 18:02:18 +0000 UTC" firstStartedPulling="2025-09-30 18:02:19.161634858 +0000 UTC m=+2796.457072416" lastFinishedPulling="2025-09-30 18:02:19.659374335 +0000 UTC m=+2796.954811893" observedRunningTime="2025-09-30 18:02:20.143663434 +0000 UTC m=+2797.439100992" watchObservedRunningTime="2025-09-30 18:02:20.147156384 +0000 UTC m=+2797.442593942" Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.539756 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ztw6f" Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.665457 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjhm\" (UniqueName: \"kubernetes.io/projected/bfbf39bf-45fd-42e1-a754-bd5193f198e9-kube-api-access-wfjhm\") pod \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.665594 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-utilities\") pod \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.665724 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-catalog-content\") pod \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\" (UID: \"bfbf39bf-45fd-42e1-a754-bd5193f198e9\") " Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.666681 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-utilities" (OuterVolumeSpecName: "utilities") pod "bfbf39bf-45fd-42e1-a754-bd5193f198e9" (UID: "bfbf39bf-45fd-42e1-a754-bd5193f198e9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.672002 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfbf39bf-45fd-42e1-a754-bd5193f198e9-kube-api-access-wfjhm" (OuterVolumeSpecName: "kube-api-access-wfjhm") pod "bfbf39bf-45fd-42e1-a754-bd5193f198e9" (UID: "bfbf39bf-45fd-42e1-a754-bd5193f198e9"). InnerVolumeSpecName "kube-api-access-wfjhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.734934 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfbf39bf-45fd-42e1-a754-bd5193f198e9" (UID: "bfbf39bf-45fd-42e1-a754-bd5193f198e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.767968 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.768015 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjhm\" (UniqueName: \"kubernetes.io/projected/bfbf39bf-45fd-42e1-a754-bd5193f198e9-kube-api-access-wfjhm\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:20 crc kubenswrapper[4688]: I0930 18:02:20.768030 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfbf39bf-45fd-42e1-a754-bd5193f198e9-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.138542 4688 generic.go:334] "Generic (PLEG): container finished" podID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerID="744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a" exitCode=0 Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.138602 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztw6f" event={"ID":"bfbf39bf-45fd-42e1-a754-bd5193f198e9","Type":"ContainerDied","Data":"744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a"} Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.138895 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztw6f" event={"ID":"bfbf39bf-45fd-42e1-a754-bd5193f198e9","Type":"ContainerDied","Data":"86e761b2636e2a6786dc0fee946fccce3946ea344f585b44de7f39454319bd33"} Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.138920 4688 scope.go:117] "RemoveContainer" containerID="744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.138609 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ztw6f" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.163891 4688 scope.go:117] "RemoveContainer" containerID="fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.173329 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ztw6f"] Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.187237 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ztw6f"] Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.193182 4688 scope.go:117] "RemoveContainer" containerID="1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.234439 4688 scope.go:117] "RemoveContainer" containerID="744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a" Sep 30 18:02:21 crc kubenswrapper[4688]: E0930 18:02:21.234945 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a\": container with ID starting with 744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a not found: ID does not exist" containerID="744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.234981 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a"} err="failed to get container status \"744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a\": rpc error: code = NotFound desc = could not find container \"744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a\": container with ID starting with 744c8a03455fe6be781a243ad8f30abd6c71af73eddc68541cead6a395e0e77a not found: ID does not exist" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.235002 4688 scope.go:117] "RemoveContainer" containerID="fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff" Sep 30 18:02:21 crc kubenswrapper[4688]: E0930 18:02:21.235214 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff\": container with ID starting with fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff not found: ID does not exist" containerID="fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.235236 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff"} err="failed to get container status \"fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff\": rpc error: code = NotFound desc = could not find container \"fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff\": container with ID starting with fe2cf0fb1edecc9b33b54e43561f85679d2159d6e36cf922fbbbf4ffb07b24ff not found: ID does not exist" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.235250 4688 scope.go:117] "RemoveContainer" containerID="1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd" Sep 30 18:02:21 crc kubenswrapper[4688]: E0930 18:02:21.235433 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd\": container with ID starting with 1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd not found: ID does not exist" containerID="1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.235453 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd"} err="failed to get container status \"1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd\": rpc error: code = NotFound desc = could not find container \"1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd\": container with ID starting with 1daa36882866f1d23e1f164a785d6ee2741b436d652d20e7aa4325d2ace185dd not found: ID does not exist" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.464134 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b50354de-aa5a-469b-8e20-3a62f12f4efb" path="/var/lib/kubelet/pods/b50354de-aa5a-469b-8e20-3a62f12f4efb/volumes" Sep 30 18:02:21 crc kubenswrapper[4688]: I0930 18:02:21.465057 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" path="/var/lib/kubelet/pods/bfbf39bf-45fd-42e1-a754-bd5193f198e9/volumes" Sep 30 18:02:52 crc kubenswrapper[4688]: I0930 18:02:52.545364 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:02:52 crc kubenswrapper[4688]: I0930 18:02:52.545995 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:03:22 crc kubenswrapper[4688]: I0930 18:03:22.545994 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:03:22 crc kubenswrapper[4688]: I0930 18:03:22.546876 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:03:52 crc kubenswrapper[4688]: I0930 18:03:52.545881 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:03:52 crc kubenswrapper[4688]: I0930 18:03:52.546483 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:03:52 crc kubenswrapper[4688]: I0930 18:03:52.546526 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 18:03:52 crc kubenswrapper[4688]: I0930 18:03:52.547315 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 18:03:52 crc kubenswrapper[4688]: I0930 18:03:52.547373 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" gracePeriod=600 Sep 30 18:03:52 crc kubenswrapper[4688]: E0930 18:03:52.695662 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:03:53 crc kubenswrapper[4688]: I0930 18:03:53.003040 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" exitCode=0 Sep 30 18:03:53 crc kubenswrapper[4688]: I0930 18:03:53.003081 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1"} Sep 30 18:03:53 crc kubenswrapper[4688]: I0930 18:03:53.003115 4688 scope.go:117] "RemoveContainer" containerID="8783ed068e5c3d367e6e14871549f65c0a836962ecd2c9a8d140decb78493577" Sep 30 18:03:53 crc kubenswrapper[4688]: I0930 18:03:53.003998 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:03:53 crc kubenswrapper[4688]: E0930 18:03:53.004431 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:04:08 crc kubenswrapper[4688]: I0930 18:04:08.429482 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:04:08 crc kubenswrapper[4688]: E0930 18:04:08.430319 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:04:22 crc kubenswrapper[4688]: I0930 18:04:22.430138 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:04:22 crc kubenswrapper[4688]: E0930 18:04:22.430980 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:04:36 crc kubenswrapper[4688]: I0930 18:04:36.429398 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:04:36 crc kubenswrapper[4688]: E0930 18:04:36.430104 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:04:50 crc kubenswrapper[4688]: I0930 18:04:50.429369 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:04:50 crc kubenswrapper[4688]: E0930 18:04:50.430264 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:05:02 crc kubenswrapper[4688]: I0930 18:05:02.430246 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:05:02 crc kubenswrapper[4688]: E0930 18:05:02.431133 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:05:17 crc kubenswrapper[4688]: I0930 18:05:17.430376 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:05:17 crc kubenswrapper[4688]: E0930 18:05:17.431128 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:05:28 crc kubenswrapper[4688]: I0930 18:05:28.430343 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:05:28 crc kubenswrapper[4688]: E0930 18:05:28.431234 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:05:43 crc kubenswrapper[4688]: I0930 18:05:43.438067 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:05:43 crc kubenswrapper[4688]: E0930 18:05:43.440667 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:05:55 crc kubenswrapper[4688]: I0930 18:05:55.429164 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:05:55 crc kubenswrapper[4688]: E0930 18:05:55.430088 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:06:09 crc kubenswrapper[4688]: I0930 18:06:09.429330 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:06:09 crc kubenswrapper[4688]: E0930 18:06:09.430021 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:06:20 crc kubenswrapper[4688]: I0930 18:06:20.429904 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:06:20 crc kubenswrapper[4688]: E0930 18:06:20.431116 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:06:32 crc kubenswrapper[4688]: I0930 18:06:32.429944 4688 
scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:06:32 crc kubenswrapper[4688]: E0930 18:06:32.430897 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:06:46 crc kubenswrapper[4688]: I0930 18:06:46.431388 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:06:46 crc kubenswrapper[4688]: E0930 18:06:46.432102 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:07:01 crc kubenswrapper[4688]: I0930 18:07:01.429242 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:07:01 crc kubenswrapper[4688]: E0930 18:07:01.430079 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:07:12 crc kubenswrapper[4688]: I0930 18:07:12.902078 4688 generic.go:334] "Generic (PLEG): container finished" podID="b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8" containerID="53d48476ec20606d88c8f8fbfa5571496c263ac6aa3c1b8b39e9a520f5e70c0e" exitCode=0 Sep 30 18:07:12 crc kubenswrapper[4688]: I0930 18:07:12.902128 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" event={"ID":"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8","Type":"ContainerDied","Data":"53d48476ec20606d88c8f8fbfa5571496c263ac6aa3c1b8b39e9a520f5e70c0e"} Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.321028 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.430072 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:07:14 crc kubenswrapper[4688]: E0930 18:07:14.430588 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.454691 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-combined-ca-bundle\") pod \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.455256 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-inventory\") pod \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.455417 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-ssh-key\") pod \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.455514 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-secret-0\") pod \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.455638 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crvfr\" (UniqueName: \"kubernetes.io/projected/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-kube-api-access-crvfr\") pod \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\" (UID: \"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8\") " Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.460658 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-kube-api-access-crvfr" (OuterVolumeSpecName: "kube-api-access-crvfr") pod "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8" (UID: "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8"). InnerVolumeSpecName "kube-api-access-crvfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.461377 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8" (UID: "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.485193 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8" (UID: "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.486245 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8" (UID: "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.494407 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-inventory" (OuterVolumeSpecName: "inventory") pod "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8" (UID: "b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.557944 4688 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.557976 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.557985 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.557997 4688 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.558012 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crvfr\" (UniqueName: \"kubernetes.io/projected/b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8-kube-api-access-crvfr\") on node \"crc\" DevicePath \"\"" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.923327 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" event={"ID":"b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8","Type":"ContainerDied","Data":"3abcbb5fca0b71fc665d1f5fd1a10c5a10b1a766347a4be7de560a8793c9a737"} Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.923365 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3abcbb5fca0b71fc665d1f5fd1a10c5a10b1a766347a4be7de560a8793c9a737" Sep 30 18:07:14 crc kubenswrapper[4688]: I0930 18:07:14.923414 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wghcn" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047224 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5"] Sep 30 18:07:15 crc kubenswrapper[4688]: E0930 18:07:15.047580 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerName="extract-content" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047593 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerName="extract-content" Sep 30 18:07:15 crc kubenswrapper[4688]: E0930 18:07:15.047608 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerName="extract-utilities" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047616 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerName="extract-utilities" Sep 30 18:07:15 crc kubenswrapper[4688]: E0930 18:07:15.047635 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerName="extract-content" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047644 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerName="extract-content" Sep 30 18:07:15 crc kubenswrapper[4688]: E0930 18:07:15.047663 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerName="registry-server" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047669 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerName="registry-server" Sep 30 18:07:15 crc kubenswrapper[4688]: E0930 18:07:15.047680 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerName="extract-utilities" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047686 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerName="extract-utilities" Sep 30 18:07:15 crc kubenswrapper[4688]: E0930 18:07:15.047701 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047709 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Sep 30 18:07:15 crc kubenswrapper[4688]: E0930 18:07:15.047730 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerName="registry-server" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047736 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerName="registry-server" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047919 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047939 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfbf39bf-45fd-42e1-a754-bd5193f198e9" containerName="registry-server" Sep 30 
18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.047951 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b50354de-aa5a-469b-8e20-3a62f12f4efb" containerName="registry-server" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.048513 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.051493 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.051725 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.052178 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.052307 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.052481 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.052756 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.053716 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.063547 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5"] Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.065789 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.065823 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.065867 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.065889 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: 
\"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.065937 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.065978 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crfs\" (UniqueName: \"kubernetes.io/projected/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-kube-api-access-7crfs\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.065998 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.066025 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.066042 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.167530 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.167580 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.167622 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" 
(UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.167647 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.167697 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.167739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7crfs\" (UniqueName: \"kubernetes.io/projected/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-kube-api-access-7crfs\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.167761 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.167790 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.167808 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.168671 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.171932 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.172025 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.172676 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.172785 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.173183 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.173464 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.174515 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.185295 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7crfs\" (UniqueName: \"kubernetes.io/projected/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-kube-api-access-7crfs\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62pl5\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.410324 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.931961 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5"] Sep 30 18:07:15 crc kubenswrapper[4688]: W0930 18:07:15.941240 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82afbeb4_8c01_47b4_a07c_b9d1fd405e39.slice/crio-1f2d734d10d1b3575d3f89b58b84b9557fb2e8a7b8eee68c5fa75d9c8289ee09 WatchSource:0}: Error finding container 1f2d734d10d1b3575d3f89b58b84b9557fb2e8a7b8eee68c5fa75d9c8289ee09: Status 404 returned error can't find the container with id 1f2d734d10d1b3575d3f89b58b84b9557fb2e8a7b8eee68c5fa75d9c8289ee09 Sep 30 18:07:15 crc kubenswrapper[4688]: I0930 18:07:15.943560 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 18:07:16 crc kubenswrapper[4688]: I0930 18:07:16.948928 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" event={"ID":"82afbeb4-8c01-47b4-a07c-b9d1fd405e39","Type":"ContainerStarted","Data":"e4bcb504f202fd7eebc2860bc86f90dd589fcbfa19c0e5a93780b733f4f03ec8"} Sep 30 18:07:16 crc kubenswrapper[4688]: I0930 18:07:16.950530 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" event={"ID":"82afbeb4-8c01-47b4-a07c-b9d1fd405e39","Type":"ContainerStarted","Data":"1f2d734d10d1b3575d3f89b58b84b9557fb2e8a7b8eee68c5fa75d9c8289ee09"} Sep 30 18:07:16 crc kubenswrapper[4688]: I0930 18:07:16.971398 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" podStartSLOduration=1.550803554 podStartE2EDuration="1.971377962s" podCreationTimestamp="2025-09-30 18:07:15 +0000 UTC" firstStartedPulling="2025-09-30 18:07:15.943363974 +0000 UTC m=+3093.238801532" lastFinishedPulling="2025-09-30 18:07:16.363938342 +0000 UTC m=+3093.659375940" observedRunningTime="2025-09-30 18:07:16.96784043 +0000 UTC m=+3094.263277988" watchObservedRunningTime="2025-09-30 18:07:16.971377962 +0000 UTC m=+3094.266815520" Sep 30 18:07:26 crc kubenswrapper[4688]: I0930 18:07:26.429451 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:07:26 crc kubenswrapper[4688]: E0930 18:07:26.431067 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:07:41 crc kubenswrapper[4688]: I0930 18:07:41.430650 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:07:41 crc kubenswrapper[4688]: E0930 18:07:41.431950 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:07:55 crc kubenswrapper[4688]: I0930 18:07:55.429422 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:07:55 crc kubenswrapper[4688]: E0930 18:07:55.430275 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:08:09 crc kubenswrapper[4688]: I0930 18:08:09.429707 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:08:09 crc kubenswrapper[4688]: E0930 18:08:09.430557 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:08:24 crc kubenswrapper[4688]: I0930 18:08:24.429908 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:08:24 crc kubenswrapper[4688]: E0930 18:08:24.430935 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:08:36 crc kubenswrapper[4688]: I0930 18:08:36.428900 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:08:36 crc kubenswrapper[4688]: E0930 18:08:36.429759 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:08:51 crc kubenswrapper[4688]: I0930 18:08:51.431106 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:08:51 crc kubenswrapper[4688]: E0930 18:08:51.434301 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:09:06 crc kubenswrapper[4688]: I0930 18:09:06.430324 4688 
scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:09:06 crc kubenswrapper[4688]: I0930 18:09:06.983233 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"1358b5b8d9aa82b7d38b4bc85b0dbc7e68676d4b199293c0cddd66d63a316a5f"} Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.658429 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-656qk"] Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.662505 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.690895 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-656qk"] Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.835214 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f55zt\" (UniqueName: \"kubernetes.io/projected/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-kube-api-access-f55zt\") pod \"redhat-operators-656qk\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.835457 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-utilities\") pod \"redhat-operators-656qk\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.835825 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-catalog-content\") pod \"redhat-operators-656qk\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.937859 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f55zt\" (UniqueName: \"kubernetes.io/projected/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-kube-api-access-f55zt\") pod \"redhat-operators-656qk\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.937954 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-utilities\") pod \"redhat-operators-656qk\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.938022 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-catalog-content\") pod \"redhat-operators-656qk\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.938635 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-utilities\") pod \"redhat-operators-656qk\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.938670 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-catalog-content\") pod \"redhat-operators-656qk\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.959066 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f55zt\" (UniqueName: \"kubernetes.io/projected/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-kube-api-access-f55zt\") pod \"redhat-operators-656qk\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:46 crc kubenswrapper[4688]: I0930 18:10:46.988076 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:47 crc kubenswrapper[4688]: I0930 18:10:47.480125 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-656qk"] Sep 30 18:10:48 crc kubenswrapper[4688]: I0930 18:10:48.010701 4688 generic.go:334] "Generic (PLEG): container finished" podID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerID="b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e" exitCode=0 Sep 30 18:10:48 crc kubenswrapper[4688]: I0930 18:10:48.010786 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656qk" event={"ID":"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5","Type":"ContainerDied","Data":"b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e"} Sep 30 18:10:48 crc kubenswrapper[4688]: I0930 18:10:48.011060 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656qk" event={"ID":"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5","Type":"ContainerStarted","Data":"c70e2767dd6df6c98c7a04656286a028a18ebbe98b77d6e16c950a0ce2eda9aa"} Sep 30 18:10:50 crc kubenswrapper[4688]: I0930 18:10:50.036124 4688 generic.go:334] "Generic (PLEG): container finished" podID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerID="bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb" exitCode=0 Sep 30 18:10:50 crc kubenswrapper[4688]: I0930 18:10:50.036249 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656qk" event={"ID":"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5","Type":"ContainerDied","Data":"bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb"} Sep 30 18:10:51 crc kubenswrapper[4688]: I0930 18:10:51.050195 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656qk" event={"ID":"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5","Type":"ContainerStarted","Data":"8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785"} Sep 30 18:10:51 crc kubenswrapper[4688]: I0930 18:10:51.090172 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-656qk" podStartSLOduration=2.457180758 podStartE2EDuration="5.090142282s" podCreationTimestamp="2025-09-30 18:10:46 +0000 UTC" firstStartedPulling="2025-09-30 18:10:48.012755332 +0000 UTC m=+3305.308192890" lastFinishedPulling="2025-09-30 
18:10:50.645716856 +0000 UTC m=+3307.941154414" observedRunningTime="2025-09-30 18:10:51.075961575 +0000 UTC m=+3308.371399193" watchObservedRunningTime="2025-09-30 18:10:51.090142282 +0000 UTC m=+3308.385579880" Sep 30 18:10:56 crc kubenswrapper[4688]: I0930 18:10:56.989165 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:56 crc kubenswrapper[4688]: I0930 18:10:56.989495 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:57 crc kubenswrapper[4688]: I0930 18:10:57.046086 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:57 crc kubenswrapper[4688]: I0930 18:10:57.173860 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:57 crc kubenswrapper[4688]: I0930 18:10:57.291011 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-656qk"] Sep 30 18:10:59 crc kubenswrapper[4688]: I0930 18:10:59.138708 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-656qk" podUID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerName="registry-server" containerID="cri-o://8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785" gracePeriod=2 Sep 30 18:10:59 crc kubenswrapper[4688]: I0930 18:10:59.598200 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:10:59 crc kubenswrapper[4688]: I0930 18:10:59.686115 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-utilities\") pod \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " Sep 30 18:10:59 crc kubenswrapper[4688]: I0930 18:10:59.686309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-catalog-content\") pod \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " Sep 30 18:10:59 crc kubenswrapper[4688]: I0930 18:10:59.686551 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f55zt\" (UniqueName: \"kubernetes.io/projected/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-kube-api-access-f55zt\") pod \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\" (UID: \"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5\") " Sep 30 18:10:59 crc kubenswrapper[4688]: I0930 18:10:59.686780 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-utilities" (OuterVolumeSpecName: "utilities") pod "4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" (UID: "4cf2a0fb-5a75-4e90-8e64-6af32452e4c5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:10:59 crc kubenswrapper[4688]: I0930 18:10:59.687870 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:10:59 crc kubenswrapper[4688]: I0930 18:10:59.694626 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-kube-api-access-f55zt" (OuterVolumeSpecName: "kube-api-access-f55zt") pod "4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" (UID: "4cf2a0fb-5a75-4e90-8e64-6af32452e4c5"). InnerVolumeSpecName "kube-api-access-f55zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:10:59 crc kubenswrapper[4688]: I0930 18:10:59.790307 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f55zt\" (UniqueName: \"kubernetes.io/projected/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-kube-api-access-f55zt\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.153471 4688 generic.go:334] "Generic (PLEG): container finished" podID="82afbeb4-8c01-47b4-a07c-b9d1fd405e39" containerID="e4bcb504f202fd7eebc2860bc86f90dd589fcbfa19c0e5a93780b733f4f03ec8" exitCode=0 Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.153617 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" event={"ID":"82afbeb4-8c01-47b4-a07c-b9d1fd405e39","Type":"ContainerDied","Data":"e4bcb504f202fd7eebc2860bc86f90dd589fcbfa19c0e5a93780b733f4f03ec8"} Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.164625 4688 generic.go:334] "Generic (PLEG): container finished" podID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerID="8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785" exitCode=0 Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.164668 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656qk" event={"ID":"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5","Type":"ContainerDied","Data":"8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785"} Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.164696 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-656qk" event={"ID":"4cf2a0fb-5a75-4e90-8e64-6af32452e4c5","Type":"ContainerDied","Data":"c70e2767dd6df6c98c7a04656286a028a18ebbe98b77d6e16c950a0ce2eda9aa"} Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.164715 4688 scope.go:117] "RemoveContainer" containerID="8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785" Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.164891 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-656qk" Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.194037 4688 scope.go:117] "RemoveContainer" containerID="bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb" Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.221887 4688 scope.go:117] "RemoveContainer" containerID="b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e" Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.290113 4688 scope.go:117] "RemoveContainer" containerID="8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785" Sep 30 18:11:00 crc kubenswrapper[4688]: E0930 18:11:00.290459 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785\": container with ID starting with 8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785 not found: ID does not exist" containerID="8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785" Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.290504 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785"} err="failed to get container status \"8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785\": rpc error: code = NotFound desc = could not find container \"8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785\": container with ID starting with 8dd9501fb89a3141916da7694d3e5791e21e2dcc1223dcf51fa6fb81a46ae785 not found: ID does not exist" Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.290532 4688 scope.go:117] "RemoveContainer" containerID="bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb" Sep 30 18:11:00 crc kubenswrapper[4688]: E0930 18:11:00.291169 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb\": container with ID starting with bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb not found: ID does not exist" containerID="bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb" Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.291200 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb"} err="failed to get container status \"bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb\": rpc error: code = NotFound desc = could not find container \"bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb\": container with ID starting with bd24ff227154dfc98fc87abdd48c7371c5da019a23bfa23e92ab53367e7cbecb not found: ID does not exist" Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.291217 4688 scope.go:117] "RemoveContainer" containerID="b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e" Sep 30 18:11:00 crc kubenswrapper[4688]: E0930 18:11:00.291496 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e\": container with ID starting with b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e not found: ID does not exist" containerID="b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e" 
Sep 30 18:11:00 crc kubenswrapper[4688]: I0930 18:11:00.291551 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e"} err="failed to get container status \"b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e\": rpc error: code = NotFound desc = could not find container \"b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e\": container with ID starting with b1152e0be17bc3ca7d4b25ea1ee033598d6bbd6b2d637be8f980e385c212305e not found: ID does not exist" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.273225 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" (UID: "4cf2a0fb-5a75-4e90-8e64-6af32452e4c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.325264 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.416095 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-656qk"] Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.424413 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-656qk"] Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.464209 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" path="/var/lib/kubelet/pods/4cf2a0fb-5a75-4e90-8e64-6af32452e4c5/volumes" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.645094 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.731838 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-ssh-key\") pod \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.732013 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-inventory\") pod \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.732045 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-1\") pod \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.732067 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-1\") pod \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.732088 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7crfs\" (UniqueName: \"kubernetes.io/projected/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-kube-api-access-7crfs\") pod \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.732178 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-combined-ca-bundle\") pod \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.732219 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-0\") pod \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.732240 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-extra-config-0\") pod \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.732257 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-0\") pod \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\" (UID: \"82afbeb4-8c01-47b4-a07c-b9d1fd405e39\") " Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.737803 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "82afbeb4-8c01-47b4-a07c-b9d1fd405e39" (UID: "82afbeb4-8c01-47b4-a07c-b9d1fd405e39"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.739370 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-kube-api-access-7crfs" (OuterVolumeSpecName: "kube-api-access-7crfs") pod "82afbeb4-8c01-47b4-a07c-b9d1fd405e39" (UID: "82afbeb4-8c01-47b4-a07c-b9d1fd405e39"). InnerVolumeSpecName "kube-api-access-7crfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.755883 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "82afbeb4-8c01-47b4-a07c-b9d1fd405e39" (UID: "82afbeb4-8c01-47b4-a07c-b9d1fd405e39"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.759693 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "82afbeb4-8c01-47b4-a07c-b9d1fd405e39" (UID: "82afbeb4-8c01-47b4-a07c-b9d1fd405e39"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.763040 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "82afbeb4-8c01-47b4-a07c-b9d1fd405e39" (UID: "82afbeb4-8c01-47b4-a07c-b9d1fd405e39"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.767347 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "82afbeb4-8c01-47b4-a07c-b9d1fd405e39" (UID: "82afbeb4-8c01-47b4-a07c-b9d1fd405e39"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.777268 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-inventory" (OuterVolumeSpecName: "inventory") pod "82afbeb4-8c01-47b4-a07c-b9d1fd405e39" (UID: "82afbeb4-8c01-47b4-a07c-b9d1fd405e39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.779448 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "82afbeb4-8c01-47b4-a07c-b9d1fd405e39" (UID: "82afbeb4-8c01-47b4-a07c-b9d1fd405e39"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.781655 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "82afbeb4-8c01-47b4-a07c-b9d1fd405e39" (UID: "82afbeb4-8c01-47b4-a07c-b9d1fd405e39"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.834740 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.834782 4688 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.834798 4688 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.834810 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7crfs\" (UniqueName: \"kubernetes.io/projected/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-kube-api-access-7crfs\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.834821 4688 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.834835 4688 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.834846 4688 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.834859 4688 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:01 crc kubenswrapper[4688]: I0930 18:11:01.834873 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82afbeb4-8c01-47b4-a07c-b9d1fd405e39-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.192112 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" event={"ID":"82afbeb4-8c01-47b4-a07c-b9d1fd405e39","Type":"ContainerDied","Data":"1f2d734d10d1b3575d3f89b58b84b9557fb2e8a7b8eee68c5fa75d9c8289ee09"} Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.192159 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f2d734d10d1b3575d3f89b58b84b9557fb2e8a7b8eee68c5fa75d9c8289ee09" Sep 30 18:11:02 crc 
kubenswrapper[4688]: I0930 18:11:02.192226 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62pl5" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.274556 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p"] Sep 30 18:11:02 crc kubenswrapper[4688]: E0930 18:11:02.274917 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerName="extract-utilities" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.274934 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerName="extract-utilities" Sep 30 18:11:02 crc kubenswrapper[4688]: E0930 18:11:02.274946 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerName="registry-server" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.274952 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerName="registry-server" Sep 30 18:11:02 crc kubenswrapper[4688]: E0930 18:11:02.274978 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82afbeb4-8c01-47b4-a07c-b9d1fd405e39" containerName="nova-edpm-deployment-openstack-edpm-ipam" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.274985 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="82afbeb4-8c01-47b4-a07c-b9d1fd405e39" containerName="nova-edpm-deployment-openstack-edpm-ipam" Sep 30 18:11:02 crc kubenswrapper[4688]: E0930 18:11:02.275007 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerName="extract-content" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.275014 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerName="extract-content" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.275179 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="82afbeb4-8c01-47b4-a07c-b9d1fd405e39" containerName="nova-edpm-deployment-openstack-edpm-ipam" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.275195 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf2a0fb-5a75-4e90-8e64-6af32452e4c5" containerName="registry-server" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.275910 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.278740 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.279055 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.279139 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.279835 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.281067 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7lpjr" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.288314 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p"] Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.342695 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.342794 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6wh\" (UniqueName: \"kubernetes.io/projected/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-kube-api-access-rf6wh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.342955 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.343159 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.343191 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 
18:11:02.343220 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.343242 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.444972 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6wh\" (UniqueName: \"kubernetes.io/projected/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-kube-api-access-rf6wh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.445026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.445108 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.445135 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.445155 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.445170 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.445232 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.452944 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.453723 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.455095 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.455126 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.462761 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.462953 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.476622 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6wh\" (UniqueName: \"kubernetes.io/projected/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-kube-api-access-rf6wh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p\" (UID: 
\"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:02 crc kubenswrapper[4688]: I0930 18:11:02.594874 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" Sep 30 18:11:03 crc kubenswrapper[4688]: I0930 18:11:03.181653 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p"] Sep 30 18:11:03 crc kubenswrapper[4688]: W0930 18:11:03.192896 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffd57ee0_1383_4acb_8892_b5d00ae71ea4.slice/crio-94b1e658e0985932f2f7fdd87fb0cff51a25534fc86a09f2ae74218a1f34a731 WatchSource:0}: Error finding container 94b1e658e0985932f2f7fdd87fb0cff51a25534fc86a09f2ae74218a1f34a731: Status 404 returned error can't find the container with id 94b1e658e0985932f2f7fdd87fb0cff51a25534fc86a09f2ae74218a1f34a731 Sep 30 18:11:04 crc kubenswrapper[4688]: I0930 18:11:04.211404 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" event={"ID":"ffd57ee0-1383-4acb-8892-b5d00ae71ea4","Type":"ContainerStarted","Data":"7be30fcb08d254fa08de0840ee8e12d3286ce00d68a04034d2a6a75ba0ef8099"} Sep 30 18:11:04 crc kubenswrapper[4688]: I0930 18:11:04.211824 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" event={"ID":"ffd57ee0-1383-4acb-8892-b5d00ae71ea4","Type":"ContainerStarted","Data":"94b1e658e0985932f2f7fdd87fb0cff51a25534fc86a09f2ae74218a1f34a731"} Sep 30 18:11:04 crc kubenswrapper[4688]: I0930 18:11:04.236915 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" podStartSLOduration=1.7289850549999999 podStartE2EDuration="2.236894755s" podCreationTimestamp="2025-09-30 18:11:02 +0000 UTC" firstStartedPulling="2025-09-30 18:11:03.197938694 +0000 UTC m=+3320.493376252" lastFinishedPulling="2025-09-30 18:11:03.705848394 +0000 UTC m=+3321.001285952" observedRunningTime="2025-09-30 18:11:04.227658866 +0000 UTC m=+3321.523096444" watchObservedRunningTime="2025-09-30 18:11:04.236894755 +0000 UTC m=+3321.532332323" Sep 30 18:11:22 crc kubenswrapper[4688]: I0930 18:11:22.545475 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:11:22 crc kubenswrapper[4688]: I0930 18:11:22.546017 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:11:52 crc kubenswrapper[4688]: I0930 18:11:52.545841 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:11:52 crc kubenswrapper[4688]: I0930 18:11:52.546372 4688 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:12:07 crc kubenswrapper[4688]: I0930 18:12:07.942054 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h4nkn"] Sep 30 18:12:07 crc kubenswrapper[4688]: I0930 18:12:07.945123 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:07 crc kubenswrapper[4688]: I0930 18:12:07.953923 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4nkn"] Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.021806 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd2ss\" (UniqueName: \"kubernetes.io/projected/6cfdb613-5289-4ee3-ab01-762034b72869-kube-api-access-sd2ss\") pod \"certified-operators-h4nkn\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.021862 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-catalog-content\") pod \"certified-operators-h4nkn\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.022148 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-utilities\") pod \"certified-operators-h4nkn\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.124218 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd2ss\" (UniqueName: \"kubernetes.io/projected/6cfdb613-5289-4ee3-ab01-762034b72869-kube-api-access-sd2ss\") pod \"certified-operators-h4nkn\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.124277 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-catalog-content\") pod \"certified-operators-h4nkn\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.124348 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-utilities\") pod \"certified-operators-h4nkn\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.124891 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-catalog-content\") pod 
\"certified-operators-h4nkn\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.124900 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-utilities\") pod \"certified-operators-h4nkn\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.148934 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd2ss\" (UniqueName: \"kubernetes.io/projected/6cfdb613-5289-4ee3-ab01-762034b72869-kube-api-access-sd2ss\") pod \"certified-operators-h4nkn\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.276385 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:08 crc kubenswrapper[4688]: I0930 18:12:08.818422 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4nkn"] Sep 30 18:12:09 crc kubenswrapper[4688]: I0930 18:12:09.812723 4688 generic.go:334] "Generic (PLEG): container finished" podID="6cfdb613-5289-4ee3-ab01-762034b72869" containerID="5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc" exitCode=0 Sep 30 18:12:09 crc kubenswrapper[4688]: I0930 18:12:09.813048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4nkn" event={"ID":"6cfdb613-5289-4ee3-ab01-762034b72869","Type":"ContainerDied","Data":"5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc"} Sep 30 18:12:09 crc kubenswrapper[4688]: I0930 18:12:09.813074 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4nkn" event={"ID":"6cfdb613-5289-4ee3-ab01-762034b72869","Type":"ContainerStarted","Data":"5c4fc4c1ab9ffb54335328134ada940c8f5659d9e073e0329713313e93128232"} Sep 30 18:12:11 crc kubenswrapper[4688]: I0930 18:12:11.830712 4688 generic.go:334] "Generic (PLEG): container finished" podID="6cfdb613-5289-4ee3-ab01-762034b72869" containerID="adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534" exitCode=0 Sep 30 18:12:11 crc kubenswrapper[4688]: I0930 18:12:11.830928 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4nkn" event={"ID":"6cfdb613-5289-4ee3-ab01-762034b72869","Type":"ContainerDied","Data":"adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534"} Sep 30 18:12:12 crc kubenswrapper[4688]: I0930 18:12:12.844131 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4nkn" event={"ID":"6cfdb613-5289-4ee3-ab01-762034b72869","Type":"ContainerStarted","Data":"eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27"} Sep 30 18:12:12 crc kubenswrapper[4688]: I0930 18:12:12.868834 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h4nkn" podStartSLOduration=3.437145293 podStartE2EDuration="5.868815392s" podCreationTimestamp="2025-09-30 18:12:07 +0000 UTC" firstStartedPulling="2025-09-30 18:12:09.814949471 +0000 UTC m=+3387.110387029" lastFinishedPulling="2025-09-30 18:12:12.24661957 +0000 UTC m=+3389.542057128" 
observedRunningTime="2025-09-30 18:12:12.86297068 +0000 UTC m=+3390.158408258" watchObservedRunningTime="2025-09-30 18:12:12.868815392 +0000 UTC m=+3390.164252940" Sep 30 18:12:18 crc kubenswrapper[4688]: I0930 18:12:18.277005 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:18 crc kubenswrapper[4688]: I0930 18:12:18.277482 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:18 crc kubenswrapper[4688]: I0930 18:12:18.334894 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:18 crc kubenswrapper[4688]: I0930 18:12:18.946357 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:18 crc kubenswrapper[4688]: I0930 18:12:18.998644 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4nkn"] Sep 30 18:12:20 crc kubenswrapper[4688]: I0930 18:12:20.923193 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h4nkn" podUID="6cfdb613-5289-4ee3-ab01-762034b72869" containerName="registry-server" containerID="cri-o://eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27" gracePeriod=2 Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.363493 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.516374 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd2ss\" (UniqueName: \"kubernetes.io/projected/6cfdb613-5289-4ee3-ab01-762034b72869-kube-api-access-sd2ss\") pod \"6cfdb613-5289-4ee3-ab01-762034b72869\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.516508 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-catalog-content\") pod \"6cfdb613-5289-4ee3-ab01-762034b72869\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.516727 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-utilities\") pod \"6cfdb613-5289-4ee3-ab01-762034b72869\" (UID: \"6cfdb613-5289-4ee3-ab01-762034b72869\") " Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.517951 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-utilities" (OuterVolumeSpecName: "utilities") pod "6cfdb613-5289-4ee3-ab01-762034b72869" (UID: "6cfdb613-5289-4ee3-ab01-762034b72869"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.522726 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cfdb613-5289-4ee3-ab01-762034b72869-kube-api-access-sd2ss" (OuterVolumeSpecName: "kube-api-access-sd2ss") pod "6cfdb613-5289-4ee3-ab01-762034b72869" (UID: "6cfdb613-5289-4ee3-ab01-762034b72869"). 
InnerVolumeSpecName "kube-api-access-sd2ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.569206 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cfdb613-5289-4ee3-ab01-762034b72869" (UID: "6cfdb613-5289-4ee3-ab01-762034b72869"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.619686 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.619737 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd2ss\" (UniqueName: \"kubernetes.io/projected/6cfdb613-5289-4ee3-ab01-762034b72869-kube-api-access-sd2ss\") on node \"crc\" DevicePath \"\"" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.619751 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfdb613-5289-4ee3-ab01-762034b72869-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.935501 4688 generic.go:334] "Generic (PLEG): container finished" podID="6cfdb613-5289-4ee3-ab01-762034b72869" containerID="eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27" exitCode=0 Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.935595 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4nkn" event={"ID":"6cfdb613-5289-4ee3-ab01-762034b72869","Type":"ContainerDied","Data":"eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27"} Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.935627 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4nkn" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.935679 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4nkn" event={"ID":"6cfdb613-5289-4ee3-ab01-762034b72869","Type":"ContainerDied","Data":"5c4fc4c1ab9ffb54335328134ada940c8f5659d9e073e0329713313e93128232"} Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.935701 4688 scope.go:117] "RemoveContainer" containerID="eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.959136 4688 scope.go:117] "RemoveContainer" containerID="adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534" Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.975381 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4nkn"] Sep 30 18:12:21 crc kubenswrapper[4688]: I0930 18:12:21.989637 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h4nkn"] Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.014689 4688 scope.go:117] "RemoveContainer" containerID="5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.050668 4688 scope.go:117] "RemoveContainer" containerID="eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27" Sep 30 18:12:22 crc kubenswrapper[4688]: E0930 18:12:22.051187 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27\": container with ID starting with eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27 not found: ID does not exist" containerID="eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.051218 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27"} err="failed to get container status \"eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27\": rpc error: code = NotFound desc = could not find container \"eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27\": container with ID starting with eb58cfa0c098312d9d526d5ec066a05970fc392ff25e22c36b259f7a83fc2d27 not found: ID does not exist" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.051239 4688 scope.go:117] "RemoveContainer" containerID="adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534" Sep 30 18:12:22 crc kubenswrapper[4688]: E0930 18:12:22.051498 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534\": container with ID starting with adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534 not found: ID does not exist" containerID="adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.051636 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534"} err="failed to get container status \"adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534\": rpc error: code = NotFound desc = could not find 
container \"adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534\": container with ID starting with adb536de0063ff97e17017e0288165437f860cf3b963b718a3bf75fc55c26534 not found: ID does not exist" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.051737 4688 scope.go:117] "RemoveContainer" containerID="5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc" Sep 30 18:12:22 crc kubenswrapper[4688]: E0930 18:12:22.052251 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc\": container with ID starting with 5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc not found: ID does not exist" containerID="5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.052272 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc"} err="failed to get container status \"5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc\": rpc error: code = NotFound desc = could not find container \"5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc\": container with ID starting with 5915d7bd7abbfa874881fef642196b07ce650845ea77cfbf3506c9e2694905dc not found: ID does not exist" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.545660 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.545971 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.546023 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.546743 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1358b5b8d9aa82b7d38b4bc85b0dbc7e68676d4b199293c0cddd66d63a316a5f"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.546842 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://1358b5b8d9aa82b7d38b4bc85b0dbc7e68676d4b199293c0cddd66d63a316a5f" gracePeriod=600 Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.952184 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="1358b5b8d9aa82b7d38b4bc85b0dbc7e68676d4b199293c0cddd66d63a316a5f" exitCode=0 Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.952264 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"1358b5b8d9aa82b7d38b4bc85b0dbc7e68676d4b199293c0cddd66d63a316a5f"} Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.952369 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"} Sep 30 18:12:22 crc kubenswrapper[4688]: I0930 18:12:22.952414 4688 scope.go:117] "RemoveContainer" containerID="4f6589226ab5d9ccdd7753dfdc981b42030d950cf9a62107ffbf4c77787d11c1" Sep 30 18:12:23 crc kubenswrapper[4688]: I0930 18:12:23.442967 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cfdb613-5289-4ee3-ab01-762034b72869" path="/var/lib/kubelet/pods/6cfdb613-5289-4ee3-ab01-762034b72869/volumes" Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.623783 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s9vgn"] Sep 30 18:13:10 crc kubenswrapper[4688]: E0930 18:13:10.624658 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfdb613-5289-4ee3-ab01-762034b72869" containerName="extract-content" Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.624671 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfdb613-5289-4ee3-ab01-762034b72869" containerName="extract-content" Sep 30 18:13:10 crc kubenswrapper[4688]: E0930 18:13:10.624704 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfdb613-5289-4ee3-ab01-762034b72869" containerName="registry-server" Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.624710 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfdb613-5289-4ee3-ab01-762034b72869" containerName="registry-server" Sep 30 18:13:10 crc kubenswrapper[4688]: E0930 18:13:10.624733 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfdb613-5289-4ee3-ab01-762034b72869" containerName="extract-utilities" Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.624741 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfdb613-5289-4ee3-ab01-762034b72869" containerName="extract-utilities" Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.624955 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfdb613-5289-4ee3-ab01-762034b72869" containerName="registry-server" Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.626349 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.637094 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9vgn"]
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.669428 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-utilities\") pod \"redhat-marketplace-s9vgn\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") " pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.669540 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-catalog-content\") pod \"redhat-marketplace-s9vgn\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") " pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.670479 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6psgz\" (UniqueName: \"kubernetes.io/projected/0b6dbd41-38b6-49c3-8893-593615d78c10-kube-api-access-6psgz\") pod \"redhat-marketplace-s9vgn\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") " pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.772790 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-utilities\") pod \"redhat-marketplace-s9vgn\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") " pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.772876 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-catalog-content\") pod \"redhat-marketplace-s9vgn\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") " pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.772916 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6psgz\" (UniqueName: \"kubernetes.io/projected/0b6dbd41-38b6-49c3-8893-593615d78c10-kube-api-access-6psgz\") pod \"redhat-marketplace-s9vgn\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") " pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.773852 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-utilities\") pod \"redhat-marketplace-s9vgn\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") " pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.773951 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-catalog-content\") pod \"redhat-marketplace-s9vgn\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") " pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.797810 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6psgz\" (UniqueName: \"kubernetes.io/projected/0b6dbd41-38b6-49c3-8893-593615d78c10-kube-api-access-6psgz\") pod \"redhat-marketplace-s9vgn\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") " pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:10 crc kubenswrapper[4688]: I0930 18:13:10.963084 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:11 crc kubenswrapper[4688]: I0930 18:13:11.451146 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9vgn"]
Sep 30 18:13:12 crc kubenswrapper[4688]: I0930 18:13:12.419563 4688 generic.go:334] "Generic (PLEG): container finished" podID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerID="62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8" exitCode=0
Sep 30 18:13:12 crc kubenswrapper[4688]: I0930 18:13:12.419672 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9vgn" event={"ID":"0b6dbd41-38b6-49c3-8893-593615d78c10","Type":"ContainerDied","Data":"62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8"}
Sep 30 18:13:12 crc kubenswrapper[4688]: I0930 18:13:12.419929 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9vgn" event={"ID":"0b6dbd41-38b6-49c3-8893-593615d78c10","Type":"ContainerStarted","Data":"f92c65f4149b048b3fd4b57c2a4541b56ac8b3a432397c65a9d406bfc4fc2b4a"}
Sep 30 18:13:12 crc kubenswrapper[4688]: I0930 18:13:12.424271 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 30 18:13:13 crc kubenswrapper[4688]: I0930 18:13:13.431079 4688 generic.go:334] "Generic (PLEG): container finished" podID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerID="fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9" exitCode=0
Sep 30 18:13:13 crc kubenswrapper[4688]: I0930 18:13:13.439505 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9vgn" event={"ID":"0b6dbd41-38b6-49c3-8893-593615d78c10","Type":"ContainerDied","Data":"fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9"}
Sep 30 18:13:14 crc kubenswrapper[4688]: I0930 18:13:14.451971 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9vgn" event={"ID":"0b6dbd41-38b6-49c3-8893-593615d78c10","Type":"ContainerStarted","Data":"4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c"}
Sep 30 18:13:14 crc kubenswrapper[4688]: I0930 18:13:14.472419 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s9vgn" podStartSLOduration=3.015345 podStartE2EDuration="4.472396914s" podCreationTimestamp="2025-09-30 18:13:10 +0000 UTC" firstStartedPulling="2025-09-30 18:13:12.424037388 +0000 UTC m=+3449.719474946" lastFinishedPulling="2025-09-30 18:13:13.881089282 +0000 UTC m=+3451.176526860" observedRunningTime="2025-09-30 18:13:14.470387972 +0000 UTC m=+3451.765825550" watchObservedRunningTime="2025-09-30 18:13:14.472396914 +0000 UTC m=+3451.767834472"
Sep 30 18:13:20 crc kubenswrapper[4688]: I0930 18:13:20.963204 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:20 crc kubenswrapper[4688]: I0930 18:13:20.964355 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:21 crc kubenswrapper[4688]: I0930 18:13:21.012504 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:21 crc kubenswrapper[4688]: I0930 18:13:21.568318 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:21 crc kubenswrapper[4688]: I0930 18:13:21.621826 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9vgn"]
Sep 30 18:13:23 crc kubenswrapper[4688]: I0930 18:13:23.536599 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s9vgn" podUID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerName="registry-server" containerID="cri-o://4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c" gracePeriod=2
Sep 30 18:13:23 crc kubenswrapper[4688]: I0930 18:13:23.994091 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.023960 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6psgz\" (UniqueName: \"kubernetes.io/projected/0b6dbd41-38b6-49c3-8893-593615d78c10-kube-api-access-6psgz\") pod \"0b6dbd41-38b6-49c3-8893-593615d78c10\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") "
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.024043 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-utilities\") pod \"0b6dbd41-38b6-49c3-8893-593615d78c10\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") "
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.024061 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-catalog-content\") pod \"0b6dbd41-38b6-49c3-8893-593615d78c10\" (UID: \"0b6dbd41-38b6-49c3-8893-593615d78c10\") "
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.025036 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-utilities" (OuterVolumeSpecName: "utilities") pod "0b6dbd41-38b6-49c3-8893-593615d78c10" (UID: "0b6dbd41-38b6-49c3-8893-593615d78c10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.032859 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6dbd41-38b6-49c3-8893-593615d78c10-kube-api-access-6psgz" (OuterVolumeSpecName: "kube-api-access-6psgz") pod "0b6dbd41-38b6-49c3-8893-593615d78c10" (UID: "0b6dbd41-38b6-49c3-8893-593615d78c10"). InnerVolumeSpecName "kube-api-access-6psgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.036677 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b6dbd41-38b6-49c3-8893-593615d78c10" (UID: "0b6dbd41-38b6-49c3-8893-593615d78c10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.126838 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6psgz\" (UniqueName: \"kubernetes.io/projected/0b6dbd41-38b6-49c3-8893-593615d78c10-kube-api-access-6psgz\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.126891 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.126905 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6dbd41-38b6-49c3-8893-593615d78c10-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.550360 4688 generic.go:334] "Generic (PLEG): container finished" podID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerID="4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c" exitCode=0
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.550405 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9vgn" event={"ID":"0b6dbd41-38b6-49c3-8893-593615d78c10","Type":"ContainerDied","Data":"4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c"}
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.550432 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9vgn" event={"ID":"0b6dbd41-38b6-49c3-8893-593615d78c10","Type":"ContainerDied","Data":"f92c65f4149b048b3fd4b57c2a4541b56ac8b3a432397c65a9d406bfc4fc2b4a"}
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.550445 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9vgn"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.550451 4688 scope.go:117] "RemoveContainer" containerID="4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.568614 4688 scope.go:117] "RemoveContainer" containerID="fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.612703 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9vgn"]
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.618808 4688 scope.go:117] "RemoveContainer" containerID="62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.623740 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9vgn"]
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.646358 4688 scope.go:117] "RemoveContainer" containerID="4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c"
Sep 30 18:13:24 crc kubenswrapper[4688]: E0930 18:13:24.647048 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c\": container with ID starting with 4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c not found: ID does not exist" containerID="4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.647097 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c"} err="failed to get container status \"4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c\": rpc error: code = NotFound desc = could not find container \"4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c\": container with ID starting with 4ce723b9e4144a0604719fa838fc524f58577a8fb3447d3f7c88b5cd99cdf04c not found: ID does not exist"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.647126 4688 scope.go:117] "RemoveContainer" containerID="fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9"
Sep 30 18:13:24 crc kubenswrapper[4688]: E0930 18:13:24.647741 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9\": container with ID starting with fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9 not found: ID does not exist" containerID="fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.647808 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9"} err="failed to get container status \"fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9\": rpc error: code = NotFound desc = could not find container \"fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9\": container with ID starting with fe4733c31dfefa4c0b793cf8c7fd577c1eb46626c445a446ca2214a4145b56d9 not found: ID does not exist"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.647856 4688 scope.go:117] "RemoveContainer" containerID="62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8"
Sep 30 18:13:24 crc kubenswrapper[4688]: E0930 18:13:24.648475 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8\": container with ID starting with 62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8 not found: ID does not exist" containerID="62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8"
Sep 30 18:13:24 crc kubenswrapper[4688]: I0930 18:13:24.648504 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8"} err="failed to get container status \"62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8\": rpc error: code = NotFound desc = could not find container \"62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8\": container with ID starting with 62f19a8157762c720d775c73ff7568b76c39c522c3e2ab232078048fef5e55e8 not found: ID does not exist"
Sep 30 18:13:25 crc kubenswrapper[4688]: I0930 18:13:25.443674 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b6dbd41-38b6-49c3-8893-593615d78c10" path="/var/lib/kubelet/pods/0b6dbd41-38b6-49c3-8893-593615d78c10/volumes"
Sep 30 18:13:51 crc kubenswrapper[4688]: I0930 18:13:51.833108 4688 generic.go:334] "Generic (PLEG): container finished" podID="ffd57ee0-1383-4acb-8892-b5d00ae71ea4" containerID="7be30fcb08d254fa08de0840ee8e12d3286ce00d68a04034d2a6a75ba0ef8099" exitCode=0
Sep 30 18:13:51 crc kubenswrapper[4688]: I0930 18:13:51.833193 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" event={"ID":"ffd57ee0-1383-4acb-8892-b5d00ae71ea4","Type":"ContainerDied","Data":"7be30fcb08d254fa08de0840ee8e12d3286ce00d68a04034d2a6a75ba0ef8099"}
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.306421 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p"
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.414006 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ssh-key\") pod \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") "
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.414075 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf6wh\" (UniqueName: \"kubernetes.io/projected/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-kube-api-access-rf6wh\") pod \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") "
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.414150 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-0\") pod \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") "
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.414183 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-telemetry-combined-ca-bundle\") pod \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") "
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.414240 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-inventory\") pod \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") "
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.414282 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-1\") pod \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") "
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.414344 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-2\") pod \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\" (UID: \"ffd57ee0-1383-4acb-8892-b5d00ae71ea4\") "
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.420680 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ffd57ee0-1383-4acb-8892-b5d00ae71ea4" (UID: "ffd57ee0-1383-4acb-8892-b5d00ae71ea4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.421545 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-kube-api-access-rf6wh" (OuterVolumeSpecName: "kube-api-access-rf6wh") pod "ffd57ee0-1383-4acb-8892-b5d00ae71ea4" (UID: "ffd57ee0-1383-4acb-8892-b5d00ae71ea4"). InnerVolumeSpecName "kube-api-access-rf6wh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.448021 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ffd57ee0-1383-4acb-8892-b5d00ae71ea4" (UID: "ffd57ee0-1383-4acb-8892-b5d00ae71ea4"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.451673 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-inventory" (OuterVolumeSpecName: "inventory") pod "ffd57ee0-1383-4acb-8892-b5d00ae71ea4" (UID: "ffd57ee0-1383-4acb-8892-b5d00ae71ea4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.455161 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ffd57ee0-1383-4acb-8892-b5d00ae71ea4" (UID: "ffd57ee0-1383-4acb-8892-b5d00ae71ea4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.457364 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ffd57ee0-1383-4acb-8892-b5d00ae71ea4" (UID: "ffd57ee0-1383-4acb-8892-b5d00ae71ea4"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.467732 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ffd57ee0-1383-4acb-8892-b5d00ae71ea4" (UID: "ffd57ee0-1383-4acb-8892-b5d00ae71ea4"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.516900 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.516937 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ssh-key\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.516950 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf6wh\" (UniqueName: \"kubernetes.io/projected/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-kube-api-access-rf6wh\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.516964 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.517015 4688 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.517028 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-inventory\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.517038 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ffd57ee0-1383-4acb-8892-b5d00ae71ea4-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.874113 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p" event={"ID":"ffd57ee0-1383-4acb-8892-b5d00ae71ea4","Type":"ContainerDied","Data":"94b1e658e0985932f2f7fdd87fb0cff51a25534fc86a09f2ae74218a1f34a731"}
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.874167 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b1e658e0985932f2f7fdd87fb0cff51a25534fc86a09f2ae74218a1f34a731"
Sep 30 18:13:53 crc kubenswrapper[4688]: I0930 18:13:53.874365 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p"
Sep 30 18:14:22 crc kubenswrapper[4688]: I0930 18:14:22.545661 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 18:14:22 crc kubenswrapper[4688]: I0930 18:14:22.546191 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 18:14:52 crc kubenswrapper[4688]: I0930 18:14:52.545804 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 18:14:52 crc kubenswrapper[4688]: I0930 18:14:52.546403 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.159128 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"]
Sep 30 18:15:00 crc kubenswrapper[4688]: E0930 18:15:00.160391 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerName="registry-server"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.160408 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerName="registry-server"
Sep 30 18:15:00 crc kubenswrapper[4688]: E0930 18:15:00.160453 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerName="extract-content"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.160464 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerName="extract-content"
Sep 30 18:15:00 crc kubenswrapper[4688]: E0930 18:15:00.160476 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerName="extract-utilities"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.160484 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerName="extract-utilities"
Sep 30 18:15:00 crc kubenswrapper[4688]: E0930 18:15:00.160501 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd57ee0-1383-4acb-8892-b5d00ae71ea4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.160510 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd57ee0-1383-4acb-8892-b5d00ae71ea4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.160764 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6dbd41-38b6-49c3-8893-593615d78c10" containerName="registry-server"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.160800 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd57ee0-1383-4acb-8892-b5d00ae71ea4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.161619 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.164525 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.164774 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.168529 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"]
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.251090 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-secret-volume\") pod \"collect-profiles-29320935-9gsrw\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.251247 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-config-volume\") pod \"collect-profiles-29320935-9gsrw\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.251379 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfqb5\" (UniqueName: \"kubernetes.io/projected/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-kube-api-access-tfqb5\") pod \"collect-profiles-29320935-9gsrw\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.352593 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-secret-volume\") pod \"collect-profiles-29320935-9gsrw\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.352656 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-config-volume\") pod \"collect-profiles-29320935-9gsrw\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.352695 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfqb5\" (UniqueName: \"kubernetes.io/projected/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-kube-api-access-tfqb5\") pod \"collect-profiles-29320935-9gsrw\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.353918 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-config-volume\") pod \"collect-profiles-29320935-9gsrw\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.359043 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-secret-volume\") pod \"collect-profiles-29320935-9gsrw\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.375170 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfqb5\" (UniqueName: \"kubernetes.io/projected/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-kube-api-access-tfqb5\") pod \"collect-profiles-29320935-9gsrw\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.486389 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:00 crc kubenswrapper[4688]: I0930 18:15:00.917422 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"]
Sep 30 18:15:01 crc kubenswrapper[4688]: I0930 18:15:01.542456 4688 generic.go:334] "Generic (PLEG): container finished" podID="4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6" containerID="fc478b94ddb43327625b96dba539d35efdb5e207e529118affa64820d72f2d4e" exitCode=0
Sep 30 18:15:01 crc kubenswrapper[4688]: I0930 18:15:01.542511 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw" event={"ID":"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6","Type":"ContainerDied","Data":"fc478b94ddb43327625b96dba539d35efdb5e207e529118affa64820d72f2d4e"}
Sep 30 18:15:01 crc kubenswrapper[4688]: I0930 18:15:01.542580 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw" event={"ID":"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6","Type":"ContainerStarted","Data":"754fcc3559376b187f56e00d74845b32910087821915a67ea1e70abf920ae138"}
Sep 30 18:15:02 crc kubenswrapper[4688]: I0930 18:15:02.848607 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:02 crc kubenswrapper[4688]: I0930 18:15:02.903466 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfqb5\" (UniqueName: \"kubernetes.io/projected/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-kube-api-access-tfqb5\") pod \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") "
Sep 30 18:15:02 crc kubenswrapper[4688]: I0930 18:15:02.903658 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-secret-volume\") pod \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") "
Sep 30 18:15:02 crc kubenswrapper[4688]: I0930 18:15:02.903762 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-config-volume\") pod \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\" (UID: \"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6\") "
Sep 30 18:15:02 crc kubenswrapper[4688]: I0930 18:15:02.904375 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-config-volume" (OuterVolumeSpecName: "config-volume") pod "4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6" (UID: "4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 18:15:02 crc kubenswrapper[4688]: I0930 18:15:02.904760 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-config-volume\") on node \"crc\" DevicePath \"\""
Sep 30 18:15:02 crc kubenswrapper[4688]: I0930 18:15:02.913315 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-kube-api-access-tfqb5" (OuterVolumeSpecName: "kube-api-access-tfqb5") pod "4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6" (UID: "4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6"). InnerVolumeSpecName "kube-api-access-tfqb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 18:15:02 crc kubenswrapper[4688]: I0930 18:15:02.913504 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6" (UID: "4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 18:15:03 crc kubenswrapper[4688]: I0930 18:15:03.007266 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfqb5\" (UniqueName: \"kubernetes.io/projected/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-kube-api-access-tfqb5\") on node \"crc\" DevicePath \"\""
Sep 30 18:15:03 crc kubenswrapper[4688]: I0930 18:15:03.007314 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6-secret-volume\") on node \"crc\" DevicePath \"\""
Sep 30 18:15:03 crc kubenswrapper[4688]: I0930 18:15:03.561798 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw" event={"ID":"4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6","Type":"ContainerDied","Data":"754fcc3559376b187f56e00d74845b32910087821915a67ea1e70abf920ae138"}
Sep 30 18:15:03 crc kubenswrapper[4688]: I0930 18:15:03.561849 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="754fcc3559376b187f56e00d74845b32910087821915a67ea1e70abf920ae138"
Sep 30 18:15:03 crc kubenswrapper[4688]: I0930 18:15:03.561897 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"
Sep 30 18:15:03 crc kubenswrapper[4688]: I0930 18:15:03.926048 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"]
Sep 30 18:15:03 crc kubenswrapper[4688]: I0930 18:15:03.936851 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320890-dk82k"]
Sep 30 18:15:05 crc kubenswrapper[4688]: I0930 18:15:05.445383 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf29abb2-202c-41f6-89eb-e2a680ea4118" path="/var/lib/kubelet/pods/cf29abb2-202c-41f6-89eb-e2a680ea4118/volumes"
Sep 30 18:15:22 crc kubenswrapper[4688]: I0930 18:15:22.545787 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 18:15:22 crc kubenswrapper[4688]: I0930 18:15:22.546332 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 18:15:22 crc kubenswrapper[4688]: I0930 18:15:22.546392 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4"
Sep 30 18:15:22 crc kubenswrapper[4688]: I0930 18:15:22.547224 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 18:15:22 crc kubenswrapper[4688]: I0930 18:15:22.547277 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" gracePeriod=600
Sep 30 18:15:22 crc kubenswrapper[4688]: E0930 18:15:22.673420 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:15:22 crc kubenswrapper[4688]: I0930 18:15:22.744724 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" exitCode=0
Sep 30 18:15:22 crc kubenswrapper[4688]: I0930 18:15:22.744734 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"}
Sep 30 18:15:22 crc kubenswrapper[4688]: I0930 18:15:22.744968 4688 scope.go:117] "RemoveContainer" containerID="1358b5b8d9aa82b7d38b4bc85b0dbc7e68676d4b199293c0cddd66d63a316a5f"
Sep 30 18:15:22 crc kubenswrapper[4688]: I0930 18:15:22.745718 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:15:22 crc kubenswrapper[4688]: E0930 18:15:22.746101 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:15:37 crc kubenswrapper[4688]: I0930 18:15:37.431090 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:15:37 crc kubenswrapper[4688]: E0930 18:15:37.432261 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:15:49 crc kubenswrapper[4688]: I0930 18:15:49.342641 4688 scope.go:117] "RemoveContainer" containerID="a51cf1683c878550895a14176c941238c7fed2d4bf3e3eba55611554022e2ffb"
Sep 30 18:15:50 crc kubenswrapper[4688]: I0930 18:15:50.429706 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:15:50 crc kubenswrapper[4688]: E0930 18:15:50.430124 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:16:04 crc kubenswrapper[4688]: I0930 18:16:04.429606 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:16:04 crc kubenswrapper[4688]: E0930 18:16:04.430472 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:16:18 crc kubenswrapper[4688]: I0930 18:16:18.430563 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:16:18 crc kubenswrapper[4688]: E0930 18:16:18.431424 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:16:29 crc kubenswrapper[4688]: I0930 18:16:29.429929 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:16:29 crc kubenswrapper[4688]: E0930 18:16:29.430687 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.177556 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w4mjj"]
Sep 30 18:16:41 crc kubenswrapper[4688]: E0930 18:16:41.178472 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6" containerName="collect-profiles"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.178484 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6" containerName="collect-profiles"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.178680 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6" containerName="collect-profiles"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.180836 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.200253 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4mjj"]
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.244060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-catalog-content\") pod \"community-operators-w4mjj\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") " pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.244136 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp59c\" (UniqueName: \"kubernetes.io/projected/fc5985a7-60a3-48bb-8135-a19156362323-kube-api-access-pp59c\") pod \"community-operators-w4mjj\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") " pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.244238 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-utilities\") pod \"community-operators-w4mjj\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") " pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.346440 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-catalog-content\") pod \"community-operators-w4mjj\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") " pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.346506 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp59c\" (UniqueName: \"kubernetes.io/projected/fc5985a7-60a3-48bb-8135-a19156362323-kube-api-access-pp59c\") pod \"community-operators-w4mjj\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") " pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.346596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-utilities\") pod \"community-operators-w4mjj\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") " pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.347083 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-catalog-content\") pod \"community-operators-w4mjj\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") " pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.347152 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-utilities\") pod \"community-operators-w4mjj\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") " pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.372404 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp59c\" (UniqueName: \"kubernetes.io/projected/fc5985a7-60a3-48bb-8135-a19156362323-kube-api-access-pp59c\") pod \"community-operators-w4mjj\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") " pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:41 crc kubenswrapper[4688]: I0930 18:16:41.521750 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:42 crc kubenswrapper[4688]: I0930 18:16:42.020928 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4mjj"]
Sep 30 18:16:42 crc kubenswrapper[4688]: I0930 18:16:42.430247 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:16:42 crc kubenswrapper[4688]: E0930 18:16:42.430967 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:16:42 crc kubenswrapper[4688]: I0930 18:16:42.533244 4688 generic.go:334] "Generic (PLEG): container finished" podID="fc5985a7-60a3-48bb-8135-a19156362323" containerID="4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a" exitCode=0
Sep 30 18:16:42 crc kubenswrapper[4688]: I0930 18:16:42.533291 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4mjj" event={"ID":"fc5985a7-60a3-48bb-8135-a19156362323","Type":"ContainerDied","Data":"4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a"}
Sep 30 18:16:42 crc kubenswrapper[4688]: I0930 18:16:42.533317 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4mjj" event={"ID":"fc5985a7-60a3-48bb-8135-a19156362323","Type":"ContainerStarted","Data":"bea1cc89bb6d0017337909deab974f67b79de1862c1de2317c096d1061a16f8d"}
Sep 30 18:16:44 crc kubenswrapper[4688]: I0930 18:16:44.554235 4688 generic.go:334] "Generic (PLEG): container finished" podID="fc5985a7-60a3-48bb-8135-a19156362323" containerID="c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783" exitCode=0
Sep 30 18:16:44 crc kubenswrapper[4688]: I0930 18:16:44.554502 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4mjj" event={"ID":"fc5985a7-60a3-48bb-8135-a19156362323","Type":"ContainerDied","Data":"c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783"}
Sep 30 18:16:45 crc kubenswrapper[4688]: I0930 18:16:45.567720 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4mjj" event={"ID":"fc5985a7-60a3-48bb-8135-a19156362323","Type":"ContainerStarted","Data":"baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45"}
Sep 30 18:16:45 crc kubenswrapper[4688]: I0930 18:16:45.601729 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w4mjj" podStartSLOduration=2.185876781 podStartE2EDuration="4.601706456s" podCreationTimestamp="2025-09-30 18:16:41 +0000 UTC" firstStartedPulling="2025-09-30 18:16:42.535492906 +0000 UTC m=+3659.830930464" lastFinishedPulling="2025-09-30 18:16:44.951322551 +0000 UTC m=+3662.246760139" observedRunningTime="2025-09-30 18:16:45.593111983 +0000 UTC m=+3662.888549561" watchObservedRunningTime="2025-09-30 18:16:45.601706456 +0000 UTC m=+3662.897144014"
Sep 30 18:16:51 crc kubenswrapper[4688]: I0930 18:16:51.522071 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:51 crc kubenswrapper[4688]: I0930 18:16:51.522671 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:51 crc kubenswrapper[4688]: I0930 18:16:51.565933 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:51 crc kubenswrapper[4688]: I0930 18:16:51.675438 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:51 crc kubenswrapper[4688]: I0930 18:16:51.799278 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w4mjj"]
Sep 30 18:16:53 crc kubenswrapper[4688]: I0930 18:16:53.661307 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w4mjj" podUID="fc5985a7-60a3-48bb-8135-a19156362323" containerName="registry-server" containerID="cri-o://baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45" gracePeriod=2
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.178605 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.298967 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp59c\" (UniqueName: \"kubernetes.io/projected/fc5985a7-60a3-48bb-8135-a19156362323-kube-api-access-pp59c\") pod \"fc5985a7-60a3-48bb-8135-a19156362323\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") "
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.299036 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-catalog-content\") pod \"fc5985a7-60a3-48bb-8135-a19156362323\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") "
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.299071 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-utilities\") pod \"fc5985a7-60a3-48bb-8135-a19156362323\" (UID: \"fc5985a7-60a3-48bb-8135-a19156362323\") "
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.300105 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-utilities" (OuterVolumeSpecName: "utilities") pod "fc5985a7-60a3-48bb-8135-a19156362323" (UID: "fc5985a7-60a3-48bb-8135-a19156362323"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.306072 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5985a7-60a3-48bb-8135-a19156362323-kube-api-access-pp59c" (OuterVolumeSpecName: "kube-api-access-pp59c") pod "fc5985a7-60a3-48bb-8135-a19156362323" (UID: "fc5985a7-60a3-48bb-8135-a19156362323"). InnerVolumeSpecName "kube-api-access-pp59c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.349744 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc5985a7-60a3-48bb-8135-a19156362323" (UID: "fc5985a7-60a3-48bb-8135-a19156362323"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.401681 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp59c\" (UniqueName: \"kubernetes.io/projected/fc5985a7-60a3-48bb-8135-a19156362323-kube-api-access-pp59c\") on node \"crc\" DevicePath \"\""
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.401984 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.401996 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5985a7-60a3-48bb-8135-a19156362323-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.670382 4688 generic.go:334] "Generic (PLEG): container finished" podID="fc5985a7-60a3-48bb-8135-a19156362323" containerID="baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45" exitCode=0
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.670428 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4mjj" event={"ID":"fc5985a7-60a3-48bb-8135-a19156362323","Type":"ContainerDied","Data":"baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45"}
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.670435 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4mjj"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.670460 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4mjj" event={"ID":"fc5985a7-60a3-48bb-8135-a19156362323","Type":"ContainerDied","Data":"bea1cc89bb6d0017337909deab974f67b79de1862c1de2317c096d1061a16f8d"}
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.670482 4688 scope.go:117] "RemoveContainer" containerID="baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.691625 4688 scope.go:117] "RemoveContainer" containerID="c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.709875 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w4mjj"]
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.719900 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w4mjj"]
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.732445 4688 scope.go:117] "RemoveContainer" containerID="4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.756139 4688 scope.go:117] "RemoveContainer" containerID="baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45"
Sep 30 18:16:54 crc kubenswrapper[4688]: E0930 18:16:54.756657 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45\": container with ID starting with baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45 not found: ID does not exist" containerID="baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.756723 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45"} err="failed to get container status \"baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45\": rpc error: code = NotFound desc = could not find container \"baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45\": container with ID starting with baa1d8aed53884c554f3af97de585abc140469ca34247b2c89d699a30c520e45 not found: ID does not exist"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.756748 4688 scope.go:117] "RemoveContainer" containerID="c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783"
Sep 30 18:16:54 crc kubenswrapper[4688]: E0930 18:16:54.757069 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783\": container with ID starting with c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783 not found: ID does not exist" containerID="c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.757096 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783"} err="failed to get container status \"c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783\": rpc error: code = NotFound desc = could not find container \"c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783\": container with ID starting with c5a6fc350daa2172c19d0cdd37a9ad2302683ede36136dc9cf72a3e879ccd783 not found: ID does not exist"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.757112 4688 scope.go:117] "RemoveContainer" containerID="4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a"
Sep 30 18:16:54 crc kubenswrapper[4688]: E0930 18:16:54.757414 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a\": container with ID starting with 4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a not found: ID does not exist" containerID="4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a"
Sep 30 18:16:54 crc kubenswrapper[4688]: I0930 18:16:54.757443 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a"} err="failed to get container status \"4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a\": rpc error: code = NotFound desc = could not find container \"4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a\": container with ID starting with 4836b95d81f55716fd2f39322a6981d379eca90b476e6b10f352d227221e350a not found: ID does not exist"
Sep 30 18:16:55 crc kubenswrapper[4688]: I0930 18:16:55.442731 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5985a7-60a3-48bb-8135-a19156362323" path="/var/lib/kubelet/pods/fc5985a7-60a3-48bb-8135-a19156362323/volumes"
Sep 30 18:16:56 crc kubenswrapper[4688]: I0930 18:16:56.429903 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:16:56 crc kubenswrapper[4688]: E0930 18:16:56.430468 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:17:08 crc kubenswrapper[4688]: I0930 18:17:08.430180 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:17:08 crc kubenswrapper[4688]: E0930 18:17:08.431294 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:17:23 crc kubenswrapper[4688]: I0930 18:17:23.437350 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:17:23 crc kubenswrapper[4688]: E0930 18:17:23.438202 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:17:34 crc kubenswrapper[4688]: I0930 18:17:34.429267 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:17:34 crc kubenswrapper[4688]: E0930 18:17:34.430116 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:17:46 crc kubenswrapper[4688]: I0930 18:17:46.430757 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:17:46 crc kubenswrapper[4688]: E0930 18:17:46.432870 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:18:00 crc kubenswrapper[4688]: I0930 18:18:00.429900 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:18:00 crc kubenswrapper[4688]: E0930 18:18:00.430776 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:18:15 crc kubenswrapper[4688]: I0930 18:18:15.429143 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:18:15 crc kubenswrapper[4688]: E0930 18:18:15.429722 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:18:27 crc kubenswrapper[4688]: I0930 18:18:27.429562 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306"
Sep 30 18:18:27 crc kubenswrapper[4688]: E0930 18:18:27.430477 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4"
podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:18:42 crc kubenswrapper[4688]: I0930 18:18:42.429408 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:18:42 crc kubenswrapper[4688]: E0930 18:18:42.430708 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:18:54 crc kubenswrapper[4688]: I0930 18:18:54.430066 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:18:54 crc kubenswrapper[4688]: E0930 18:18:54.431205 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:19:09 crc kubenswrapper[4688]: I0930 18:19:09.429777 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:19:09 crc kubenswrapper[4688]: E0930 18:19:09.430537 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:19:22 crc kubenswrapper[4688]: I0930 18:19:22.429494 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:19:22 crc kubenswrapper[4688]: E0930 18:19:22.430442 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:19:36 crc kubenswrapper[4688]: I0930 18:19:36.430014 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:19:36 crc kubenswrapper[4688]: E0930 18:19:36.430897 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:19:47 crc kubenswrapper[4688]: I0930 18:19:47.429022 4688 scope.go:117] "RemoveContainer" 
containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:19:47 crc kubenswrapper[4688]: E0930 18:19:47.429697 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:20:02 crc kubenswrapper[4688]: I0930 18:20:02.430287 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:20:02 crc kubenswrapper[4688]: E0930 18:20:02.431515 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:20:17 crc kubenswrapper[4688]: I0930 18:20:17.429885 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:20:17 crc kubenswrapper[4688]: E0930 18:20:17.430902 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:20:32 crc kubenswrapper[4688]: I0930 18:20:32.429867 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:20:32 crc kubenswrapper[4688]: I0930 18:20:32.736352 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"2c7e7ddac02b326559d82d8d741afa76a2cd1af2723d410aefea8840d554eed1"} Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.915372 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wl5zn"] Sep 30 18:22:11 crc kubenswrapper[4688]: E0930 18:22:11.916427 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5985a7-60a3-48bb-8135-a19156362323" containerName="extract-content" Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.916449 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5985a7-60a3-48bb-8135-a19156362323" containerName="extract-content" Sep 30 18:22:11 crc kubenswrapper[4688]: E0930 18:22:11.916467 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5985a7-60a3-48bb-8135-a19156362323" containerName="registry-server" Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.916475 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5985a7-60a3-48bb-8135-a19156362323" containerName="registry-server" Sep 30 18:22:11 crc kubenswrapper[4688]: E0930 18:22:11.916511 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fc5985a7-60a3-48bb-8135-a19156362323" containerName="extract-utilities" Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.916521 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5985a7-60a3-48bb-8135-a19156362323" containerName="extract-utilities" Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.916776 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5985a7-60a3-48bb-8135-a19156362323" containerName="registry-server" Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.918530 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.939879 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wl5zn"] Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.968282 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8d84\" (UniqueName: \"kubernetes.io/projected/fa0bee2f-4291-423c-a23d-2520df2f2e15-kube-api-access-b8d84\") pod \"redhat-operators-wl5zn\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.968479 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-catalog-content\") pod \"redhat-operators-wl5zn\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:11 crc kubenswrapper[4688]: I0930 18:22:11.968531 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-utilities\") pod \"redhat-operators-wl5zn\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:12 crc kubenswrapper[4688]: I0930 18:22:12.070889 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-catalog-content\") pod \"redhat-operators-wl5zn\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:12 crc kubenswrapper[4688]: I0930 18:22:12.070965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-utilities\") pod \"redhat-operators-wl5zn\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:12 crc kubenswrapper[4688]: I0930 18:22:12.071049 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8d84\" (UniqueName: \"kubernetes.io/projected/fa0bee2f-4291-423c-a23d-2520df2f2e15-kube-api-access-b8d84\") pod \"redhat-operators-wl5zn\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:12 crc kubenswrapper[4688]: I0930 18:22:12.071508 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-catalog-content\") pod \"redhat-operators-wl5zn\" (UID: 
\"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:12 crc kubenswrapper[4688]: I0930 18:22:12.071613 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-utilities\") pod \"redhat-operators-wl5zn\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:12 crc kubenswrapper[4688]: I0930 18:22:12.092645 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8d84\" (UniqueName: \"kubernetes.io/projected/fa0bee2f-4291-423c-a23d-2520df2f2e15-kube-api-access-b8d84\") pod \"redhat-operators-wl5zn\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:12 crc kubenswrapper[4688]: I0930 18:22:12.261948 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:12 crc kubenswrapper[4688]: I0930 18:22:12.726846 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wl5zn"] Sep 30 18:22:13 crc kubenswrapper[4688]: I0930 18:22:13.658508 4688 generic.go:334] "Generic (PLEG): container finished" podID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerID="d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4" exitCode=0 Sep 30 18:22:13 crc kubenswrapper[4688]: I0930 18:22:13.658590 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl5zn" event={"ID":"fa0bee2f-4291-423c-a23d-2520df2f2e15","Type":"ContainerDied","Data":"d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4"} Sep 30 18:22:13 crc kubenswrapper[4688]: I0930 18:22:13.658797 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl5zn" event={"ID":"fa0bee2f-4291-423c-a23d-2520df2f2e15","Type":"ContainerStarted","Data":"08fb4465ebda346b907157f259708081a5903263057434713477a264a0cf864f"} Sep 30 18:22:13 crc kubenswrapper[4688]: I0930 18:22:13.660112 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 18:22:14 crc kubenswrapper[4688]: I0930 18:22:14.669199 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl5zn" event={"ID":"fa0bee2f-4291-423c-a23d-2520df2f2e15","Type":"ContainerStarted","Data":"fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd"} Sep 30 18:22:16 crc kubenswrapper[4688]: I0930 18:22:16.701006 4688 generic.go:334] "Generic (PLEG): container finished" podID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerID="fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd" exitCode=0 Sep 30 18:22:16 crc kubenswrapper[4688]: I0930 18:22:16.701312 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl5zn" event={"ID":"fa0bee2f-4291-423c-a23d-2520df2f2e15","Type":"ContainerDied","Data":"fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd"} Sep 30 18:22:17 crc kubenswrapper[4688]: I0930 18:22:17.712993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl5zn" event={"ID":"fa0bee2f-4291-423c-a23d-2520df2f2e15","Type":"ContainerStarted","Data":"092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7"} Sep 30 18:22:17 crc kubenswrapper[4688]: I0930 
18:22:17.729900 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wl5zn" podStartSLOduration=3.22618869 podStartE2EDuration="6.729879078s" podCreationTimestamp="2025-09-30 18:22:11 +0000 UTC" firstStartedPulling="2025-09-30 18:22:13.659847504 +0000 UTC m=+3990.955285062" lastFinishedPulling="2025-09-30 18:22:17.163537842 +0000 UTC m=+3994.458975450" observedRunningTime="2025-09-30 18:22:17.729048676 +0000 UTC m=+3995.024486254" watchObservedRunningTime="2025-09-30 18:22:17.729879078 +0000 UTC m=+3995.025316636" Sep 30 18:22:22 crc kubenswrapper[4688]: I0930 18:22:22.263392 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:22 crc kubenswrapper[4688]: I0930 18:22:22.264226 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:23 crc kubenswrapper[4688]: I0930 18:22:23.311280 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wl5zn" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerName="registry-server" probeResult="failure" output=< Sep 30 18:22:23 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Sep 30 18:22:23 crc kubenswrapper[4688]: > Sep 30 18:22:32 crc kubenswrapper[4688]: I0930 18:22:32.327149 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:32 crc kubenswrapper[4688]: I0930 18:22:32.378491 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:32 crc kubenswrapper[4688]: I0930 18:22:32.571881 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wl5zn"] Sep 30 18:22:33 crc kubenswrapper[4688]: I0930 18:22:33.863538 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wl5zn" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerName="registry-server" containerID="cri-o://092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7" gracePeriod=2
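The startup-probe failure above (timeout: failed to connect service ":50051" within 1s) means the registry-server's gRPC port was not yet accepting connections when the prober fired. Below is a rough sketch of a probe of that shape, assuming a plain TCP dial with a 1s timeout as a stand-in for the actual gRPC health probe the marketplace pods run:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeOnce dials the given address and reports failure if a connection
// cannot be established within the timeout, roughly what the failing
// startup probe above is doing against :50051.
func probeOnce(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %s: %w", addr, timeout, err)
	}
	return conn.Close()
}

func main() {
	if err := probeOnce(":50051", 1*time.Second); err != nil {
		fmt.Println("probe failed:", err) // the pod stays unready until this succeeds
	} else {
		fmt.Println("probe succeeded")
	}
}
```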
Need to start a new one" pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.593436 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-catalog-content\") pod \"fa0bee2f-4291-423c-a23d-2520df2f2e15\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.593549 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8d84\" (UniqueName: \"kubernetes.io/projected/fa0bee2f-4291-423c-a23d-2520df2f2e15-kube-api-access-b8d84\") pod \"fa0bee2f-4291-423c-a23d-2520df2f2e15\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.593639 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-utilities\") pod \"fa0bee2f-4291-423c-a23d-2520df2f2e15\" (UID: \"fa0bee2f-4291-423c-a23d-2520df2f2e15\") " Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.598233 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-utilities" (OuterVolumeSpecName: "utilities") pod "fa0bee2f-4291-423c-a23d-2520df2f2e15" (UID: "fa0bee2f-4291-423c-a23d-2520df2f2e15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.606284 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0bee2f-4291-423c-a23d-2520df2f2e15-kube-api-access-b8d84" (OuterVolumeSpecName: "kube-api-access-b8d84") pod "fa0bee2f-4291-423c-a23d-2520df2f2e15" (UID: "fa0bee2f-4291-423c-a23d-2520df2f2e15"). InnerVolumeSpecName "kube-api-access-b8d84". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.689732 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa0bee2f-4291-423c-a23d-2520df2f2e15" (UID: "fa0bee2f-4291-423c-a23d-2520df2f2e15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.697126 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8d84\" (UniqueName: \"kubernetes.io/projected/fa0bee2f-4291-423c-a23d-2520df2f2e15-kube-api-access-b8d84\") on node \"crc\" DevicePath \"\"" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.697189 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.697211 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa0bee2f-4291-423c-a23d-2520df2f2e15-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.876125 4688 generic.go:334] "Generic (PLEG): container finished" podID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerID="092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7" exitCode=0 Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.876168 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl5zn" event={"ID":"fa0bee2f-4291-423c-a23d-2520df2f2e15","Type":"ContainerDied","Data":"092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7"} Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.876216 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl5zn" event={"ID":"fa0bee2f-4291-423c-a23d-2520df2f2e15","Type":"ContainerDied","Data":"08fb4465ebda346b907157f259708081a5903263057434713477a264a0cf864f"} Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.876242 4688 scope.go:117] "RemoveContainer" containerID="092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.876244 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wl5zn" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.904809 4688 scope.go:117] "RemoveContainer" containerID="fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.922757 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wl5zn"] Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.929338 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wl5zn"] Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.941751 4688 scope.go:117] "RemoveContainer" containerID="d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.976359 4688 scope.go:117] "RemoveContainer" containerID="092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7" Sep 30 18:22:34 crc kubenswrapper[4688]: E0930 18:22:34.979107 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7\": container with ID starting with 092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7 not found: ID does not exist" containerID="092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.979145 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7"} err="failed to get container status \"092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7\": rpc error: code = NotFound desc = could not find container \"092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7\": container with ID starting with 092e2135e9934ad1a6fe351cf55dc89ffa3773987d69225232d437ec439aa5b7 not found: ID does not exist" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.979166 4688 scope.go:117] "RemoveContainer" containerID="fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd" Sep 30 18:22:34 crc kubenswrapper[4688]: E0930 18:22:34.979563 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd\": container with ID starting with fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd not found: ID does not exist" containerID="fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.979665 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd"} err="failed to get container status \"fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd\": rpc error: code = NotFound desc = could not find container \"fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd\": container with ID starting with fd4e7522d16b668e20c030080ee02dd959db0a1c7287713397834636492915fd not found: ID does not exist" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.979682 4688 scope.go:117] "RemoveContainer" containerID="d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4" Sep 30 18:22:34 crc kubenswrapper[4688]: E0930 18:22:34.980728 4688 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4\": container with ID starting with d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4 not found: ID does not exist" containerID="d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4" Sep 30 18:22:34 crc kubenswrapper[4688]: I0930 18:22:34.980759 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4"} err="failed to get container status \"d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4\": rpc error: code = NotFound desc = could not find container \"d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4\": container with ID starting with d8bbeb3a09eb82765937e206b3ae2299c3c26219184dad7f2dea2935d575ead4 not found: ID does not exist" Sep 30 18:22:35 crc kubenswrapper[4688]: I0930 18:22:35.449167 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" path="/var/lib/kubelet/pods/fa0bee2f-4291-423c-a23d-2520df2f2e15/volumes" Sep 30 18:22:40 crc kubenswrapper[4688]: I0930 18:22:40.724265 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-647757d8c5-6pzkp" podUID="62277faa-c7c7-4151-b35f-f3ebe78d3ffe" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Sep 30 18:22:52 crc kubenswrapper[4688]: I0930 18:22:52.546069 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:22:52 crc kubenswrapper[4688]: I0930 18:22:52.547148 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:23:22 crc kubenswrapper[4688]: I0930 18:23:22.546056 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:23:22 crc kubenswrapper[4688]: I0930 18:23:22.546631 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:23:26 crc kubenswrapper[4688]: I0930 18:23:26.978149 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8pbv8"] Sep 30 18:23:26 crc kubenswrapper[4688]: E0930 18:23:26.979145 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerName="registry-server" Sep 30 18:23:26 crc kubenswrapper[4688]: I0930 18:23:26.979163 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerName="registry-server" 
Sep 30 18:23:26 crc kubenswrapper[4688]: E0930 18:23:26.979188 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerName="extract-utilities" Sep 30 18:23:26 crc kubenswrapper[4688]: I0930 18:23:26.979196 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerName="extract-utilities" Sep 30 18:23:26 crc kubenswrapper[4688]: E0930 18:23:26.979213 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerName="extract-content" Sep 30 18:23:26 crc kubenswrapper[4688]: I0930 18:23:26.979219 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerName="extract-content" Sep 30 18:23:26 crc kubenswrapper[4688]: I0930 18:23:26.979431 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0bee2f-4291-423c-a23d-2520df2f2e15" containerName="registry-server" Sep 30 18:23:26 crc kubenswrapper[4688]: I0930 18:23:26.980768 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:26 crc kubenswrapper[4688]: I0930 18:23:26.996731 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pbv8"] Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.049018 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-utilities\") pod \"certified-operators-8pbv8\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.049094 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-catalog-content\") pod \"certified-operators-8pbv8\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.049190 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtdvp\" (UniqueName: \"kubernetes.io/projected/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-kube-api-access-mtdvp\") pod \"certified-operators-8pbv8\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.150603 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-catalog-content\") pod \"certified-operators-8pbv8\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.150742 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtdvp\" (UniqueName: \"kubernetes.io/projected/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-kube-api-access-mtdvp\") pod \"certified-operators-8pbv8\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.150853 4688 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-utilities\") pod \"certified-operators-8pbv8\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.151396 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-utilities\") pod \"certified-operators-8pbv8\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.151535 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-catalog-content\") pod \"certified-operators-8pbv8\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.172361 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtdvp\" (UniqueName: \"kubernetes.io/projected/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-kube-api-access-mtdvp\") pod \"certified-operators-8pbv8\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.341732 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:27 crc kubenswrapper[4688]: I0930 18:23:27.822706 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pbv8"] Sep 30 18:23:28 crc kubenswrapper[4688]: I0930 18:23:28.416123 4688 generic.go:334] "Generic (PLEG): container finished" podID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerID="5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78" exitCode=0 Sep 30 18:23:28 crc kubenswrapper[4688]: I0930 18:23:28.416249 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pbv8" event={"ID":"b9b0892b-7ef0-4d40-8e42-ce6d37450b27","Type":"ContainerDied","Data":"5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78"} Sep 30 18:23:28 crc kubenswrapper[4688]: I0930 18:23:28.417456 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pbv8" event={"ID":"b9b0892b-7ef0-4d40-8e42-ce6d37450b27","Type":"ContainerStarted","Data":"377c4752a3d1ef5d4e59d5739e0eb9ea5255df414295ca74e79c3503757a3e69"} Sep 30 18:23:29 crc kubenswrapper[4688]: I0930 18:23:29.451006 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pbv8" event={"ID":"b9b0892b-7ef0-4d40-8e42-ce6d37450b27","Type":"ContainerStarted","Data":"572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8"} Sep 30 18:23:30 crc kubenswrapper[4688]: I0930 18:23:30.440221 4688 generic.go:334] "Generic (PLEG): container finished" podID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerID="572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8" exitCode=0 Sep 30 18:23:30 crc kubenswrapper[4688]: I0930 18:23:30.440404 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pbv8" 
event={"ID":"b9b0892b-7ef0-4d40-8e42-ce6d37450b27","Type":"ContainerDied","Data":"572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8"} Sep 30 18:23:31 crc kubenswrapper[4688]: I0930 18:23:31.452598 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pbv8" event={"ID":"b9b0892b-7ef0-4d40-8e42-ce6d37450b27","Type":"ContainerStarted","Data":"9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39"} Sep 30 18:23:31 crc kubenswrapper[4688]: I0930 18:23:31.476963 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8pbv8" podStartSLOduration=2.956811048 podStartE2EDuration="5.476945896s" podCreationTimestamp="2025-09-30 18:23:26 +0000 UTC" firstStartedPulling="2025-09-30 18:23:28.418706703 +0000 UTC m=+4065.714144261" lastFinishedPulling="2025-09-30 18:23:30.938841551 +0000 UTC m=+4068.234279109" observedRunningTime="2025-09-30 18:23:31.472146982 +0000 UTC m=+4068.767584570" watchObservedRunningTime="2025-09-30 18:23:31.476945896 +0000 UTC m=+4068.772383454" Sep 30 18:23:37 crc kubenswrapper[4688]: I0930 18:23:37.342766 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:37 crc kubenswrapper[4688]: I0930 18:23:37.345033 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:37 crc kubenswrapper[4688]: I0930 18:23:37.414722 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:37 crc kubenswrapper[4688]: I0930 18:23:37.554057 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:37 crc kubenswrapper[4688]: I0930 18:23:37.647428 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pbv8"] Sep 30 18:23:39 crc kubenswrapper[4688]: I0930 18:23:39.527192 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8pbv8" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="registry-server" containerID="cri-o://9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39" gracePeriod=2 Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.195494 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.350211 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtdvp\" (UniqueName: \"kubernetes.io/projected/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-kube-api-access-mtdvp\") pod \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.350457 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-utilities\") pod \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.351660 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-utilities" (OuterVolumeSpecName: "utilities") pod "b9b0892b-7ef0-4d40-8e42-ce6d37450b27" (UID: "b9b0892b-7ef0-4d40-8e42-ce6d37450b27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.351809 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-catalog-content\") pod \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\" (UID: \"b9b0892b-7ef0-4d40-8e42-ce6d37450b27\") " Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.352609 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.366894 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-kube-api-access-mtdvp" (OuterVolumeSpecName: "kube-api-access-mtdvp") pod "b9b0892b-7ef0-4d40-8e42-ce6d37450b27" (UID: "b9b0892b-7ef0-4d40-8e42-ce6d37450b27"). InnerVolumeSpecName "kube-api-access-mtdvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.417850 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9b0892b-7ef0-4d40-8e42-ce6d37450b27" (UID: "b9b0892b-7ef0-4d40-8e42-ce6d37450b27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.454294 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.454351 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtdvp\" (UniqueName: \"kubernetes.io/projected/b9b0892b-7ef0-4d40-8e42-ce6d37450b27-kube-api-access-mtdvp\") on node \"crc\" DevicePath \"\"" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.574933 4688 generic.go:334] "Generic (PLEG): container finished" podID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerID="9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39" exitCode=0 Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.574984 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pbv8" event={"ID":"b9b0892b-7ef0-4d40-8e42-ce6d37450b27","Type":"ContainerDied","Data":"9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39"} Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.575013 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pbv8" event={"ID":"b9b0892b-7ef0-4d40-8e42-ce6d37450b27","Type":"ContainerDied","Data":"377c4752a3d1ef5d4e59d5739e0eb9ea5255df414295ca74e79c3503757a3e69"} Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.575033 4688 scope.go:117] "RemoveContainer" containerID="9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.575033 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8pbv8" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.619636 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pbv8"] Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.619649 4688 scope.go:117] "RemoveContainer" containerID="572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.627665 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8pbv8"] Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.646059 4688 scope.go:117] "RemoveContainer" containerID="5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.691543 4688 scope.go:117] "RemoveContainer" containerID="9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39" Sep 30 18:23:40 crc kubenswrapper[4688]: E0930 18:23:40.692878 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39\": container with ID starting with 9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39 not found: ID does not exist" containerID="9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.693038 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39"} err="failed to get container status \"9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39\": rpc error: code = NotFound desc = could not find container \"9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39\": container with ID starting with 9f2657e1724fd3db562a1caffe04e5143e473cf730a2f9348d33d72382218e39 not found: ID does not exist" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.693201 4688 scope.go:117] "RemoveContainer" containerID="572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8" Sep 30 18:23:40 crc kubenswrapper[4688]: E0930 18:23:40.694258 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8\": container with ID starting with 572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8 not found: ID does not exist" containerID="572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.694363 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8"} err="failed to get container status \"572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8\": rpc error: code = NotFound desc = could not find container \"572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8\": container with ID starting with 572fa4d760a34792252e5e14f6d349a2cf7a1ca84de9c771cd4f71f7e31d8bd8 not found: ID does not exist" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.694449 4688 scope.go:117] "RemoveContainer" containerID="5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78" Sep 30 18:23:40 crc kubenswrapper[4688]: E0930 18:23:40.695199 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78\": container with ID starting with 5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78 not found: ID does not exist" containerID="5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78" Sep 30 18:23:40 crc kubenswrapper[4688]: I0930 18:23:40.695246 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78"} err="failed to get container status \"5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78\": rpc error: code = NotFound desc = could not find container \"5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78\": container with ID starting with 5b6c199e0125158fcd735f8bb0703173f22b52f9eb3fb9e2744a40fdaa1d5d78 not found: ID does not exist" Sep 30 18:23:41 crc kubenswrapper[4688]: I0930 18:23:41.442179 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" path="/var/lib/kubelet/pods/b9b0892b-7ef0-4d40-8e42-ce6d37450b27/volumes" Sep 30 18:23:52 crc kubenswrapper[4688]: I0930 18:23:52.545231 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:23:52 crc kubenswrapper[4688]: I0930 18:23:52.546664 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:23:52 crc kubenswrapper[4688]: I0930 18:23:52.546794 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 18:23:52 crc kubenswrapper[4688]: I0930 18:23:52.547683 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c7e7ddac02b326559d82d8d741afa76a2cd1af2723d410aefea8840d554eed1"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 18:23:52 crc kubenswrapper[4688]: I0930 18:23:52.547847 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://2c7e7ddac02b326559d82d8d741afa76a2cd1af2723d410aefea8840d554eed1" gracePeriod=600 Sep 30 18:23:52 crc kubenswrapper[4688]: I0930 18:23:52.690070 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="2c7e7ddac02b326559d82d8d741afa76a2cd1af2723d410aefea8840d554eed1" exitCode=0 Sep 30 18:23:52 crc kubenswrapper[4688]: I0930 18:23:52.690253 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" 
event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"2c7e7ddac02b326559d82d8d741afa76a2cd1af2723d410aefea8840d554eed1"} Sep 30 18:23:52 crc kubenswrapper[4688]: I0930 18:23:52.690376 4688 scope.go:117] "RemoveContainer" containerID="8cafee7c6db6f9a2e573bbc59853aadef3db03f04ec7f33beffad38275bd6306" Sep 30 18:23:53 crc kubenswrapper[4688]: I0930 18:23:53.699464 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621"} Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.602820 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dk28r"] Sep 30 18:24:00 crc kubenswrapper[4688]: E0930 18:24:00.603923 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="registry-server" Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.603940 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="registry-server" Sep 30 18:24:00 crc kubenswrapper[4688]: E0930 18:24:00.603953 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="extract-content" Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.603962 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="extract-content" Sep 30 18:24:00 crc kubenswrapper[4688]: E0930 18:24:00.604001 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="extract-utilities" Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.604010 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="extract-utilities" Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.604292 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="registry-server" Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.606491 4688 util.go:30] "No sandbox for pod can be found. 
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.602820 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dk28r"]
Sep 30 18:24:00 crc kubenswrapper[4688]: E0930 18:24:00.603923 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="registry-server"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.603940 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="registry-server"
Sep 30 18:24:00 crc kubenswrapper[4688]: E0930 18:24:00.603953 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="extract-content"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.603962 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="extract-content"
Sep 30 18:24:00 crc kubenswrapper[4688]: E0930 18:24:00.604001 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="extract-utilities"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.604010 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="extract-utilities"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.604292 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b0892b-7ef0-4d40-8e42-ce6d37450b27" containerName="registry-server"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.606491 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.617469 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk28r"]
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.790230 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-utilities\") pod \"redhat-marketplace-dk28r\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.790423 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-catalog-content\") pod \"redhat-marketplace-dk28r\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.790485 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psdm7\" (UniqueName: \"kubernetes.io/projected/904659df-6acf-4547-bbab-088dc2ab7040-kube-api-access-psdm7\") pod \"redhat-marketplace-dk28r\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.892217 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-catalog-content\") pod \"redhat-marketplace-dk28r\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.892597 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psdm7\" (UniqueName: \"kubernetes.io/projected/904659df-6acf-4547-bbab-088dc2ab7040-kube-api-access-psdm7\") pod \"redhat-marketplace-dk28r\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.892669 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-utilities\") pod \"redhat-marketplace-dk28r\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.892956 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-catalog-content\") pod \"redhat-marketplace-dk28r\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.893189 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-utilities\") pod \"redhat-marketplace-dk28r\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.917405 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psdm7\" (UniqueName: \"kubernetes.io/projected/904659df-6acf-4547-bbab-088dc2ab7040-kube-api-access-psdm7\") pod \"redhat-marketplace-dk28r\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:00 crc kubenswrapper[4688]: I0930 18:24:00.923394 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk28r"
Sep 30 18:24:01 crc kubenswrapper[4688]: I0930 18:24:01.378624 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk28r"]
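The VerifyControllerAttachedVolume/MountVolume pairs above are the kubelet volume manager's reconciler at work: it diffs a desired state of world (the three volumes this pod requires, two emptyDirs plus a projected service-account token) against an actual state of world and issues SetUp operations for whatever is missing. A toy sketch of that reconcile pattern follows; the map-based state and function names are illustrative stand-ins, not the kubelet's real types.

```go
package main

import "fmt"

// reconcile mounts every volume the pod spec wants (desired) that the node
// does not yet have mounted (actual), mirroring the
// operationExecutor.MountVolume lines in the log.
func reconcile(desired []string, actual map[string]bool) {
	for _, vol := range desired {
		if !actual[vol] {
			fmt.Println("MountVolume.SetUp succeeded for volume", vol)
			actual[vol] = true
		}
	}
}

func main() {
	desired := []string{"utilities", "catalog-content", "kube-api-access-psdm7"}
	reconcile(desired, map[string]bool{}) // all three get mounted, as in the log
}
```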
Sep 30 18:24:01 crc kubenswrapper[4688]: I0930 18:24:01.779485 4688 generic.go:334] "Generic (PLEG): container finished" podID="904659df-6acf-4547-bbab-088dc2ab7040" containerID="a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded" exitCode=0
Sep 30 18:24:01 crc kubenswrapper[4688]: I0930 18:24:01.779593 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk28r" event={"ID":"904659df-6acf-4547-bbab-088dc2ab7040","Type":"ContainerDied","Data":"a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded"}
Sep 30 18:24:01 crc kubenswrapper[4688]: I0930 18:24:01.779739 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk28r" event={"ID":"904659df-6acf-4547-bbab-088dc2ab7040","Type":"ContainerStarted","Data":"8517cdc74dc67bbf709e4f346250f7bcabfbce5979d480b854e790c2d3e981fe"}
Sep 30 18:24:03 crc kubenswrapper[4688]: I0930 18:24:03.801064 4688 generic.go:334] "Generic (PLEG): container finished" podID="904659df-6acf-4547-bbab-088dc2ab7040" containerID="286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9" exitCode=0
Sep 30 18:24:03 crc kubenswrapper[4688]: I0930 18:24:03.801232 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk28r" event={"ID":"904659df-6acf-4547-bbab-088dc2ab7040","Type":"ContainerDied","Data":"286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9"}
Sep 30 18:24:04 crc kubenswrapper[4688]: I0930 18:24:04.814283 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk28r" event={"ID":"904659df-6acf-4547-bbab-088dc2ab7040","Type":"ContainerStarted","Data":"d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606"}
Sep 30 18:24:04 crc kubenswrapper[4688]: I0930 18:24:04.835829 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dk28r" podStartSLOduration=2.396618976 podStartE2EDuration="4.835808078s" podCreationTimestamp="2025-09-30 18:24:00 +0000 UTC" firstStartedPulling="2025-09-30 18:24:01.78174638 +0000 UTC m=+4099.077183938" lastFinishedPulling="2025-09-30 18:24:04.220935482 +0000 UTC m=+4101.516373040" observedRunningTime="2025-09-30 18:24:04.83320403 +0000 UTC m=+4102.128641598" watchObservedRunningTime="2025-09-30 18:24:04.835808078 +0000 UTC m=+4102.131245636"
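The startup-latency entry is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). The same identity holds for the community-operators entry further down. Checking the arithmetic with the timestamps copied from the entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.Parse accepts fractional seconds even though the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time { t, _ := time.Parse(layout, s); return t }

	created := parse("2025-09-30 18:24:00 +0000 UTC")
	running := parse("2025-09-30 18:24:04.835808078 +0000 UTC") // watchObservedRunningTime
	pullStart := parse("2025-09-30 18:24:01.78174638 +0000 UTC")
	pullEnd := parse("2025-09-30 18:24:04.220935482 +0000 UTC")

	e2e := running.Sub(created)
	fmt.Println(e2e)                          // 4.835808078s = podStartE2EDuration
	fmt.Println(e2e - pullEnd.Sub(pullStart)) // 2.396618976s = podStartSLOduration
}
```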
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dk28r" Sep 30 18:24:11 crc kubenswrapper[4688]: I0930 18:24:11.943431 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dk28r" Sep 30 18:24:11 crc kubenswrapper[4688]: I0930 18:24:11.987461 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk28r"] Sep 30 18:24:13 crc kubenswrapper[4688]: I0930 18:24:13.894049 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dk28r" podUID="904659df-6acf-4547-bbab-088dc2ab7040" containerName="registry-server" containerID="cri-o://d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606" gracePeriod=2 Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.524562 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk28r" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.644991 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psdm7\" (UniqueName: \"kubernetes.io/projected/904659df-6acf-4547-bbab-088dc2ab7040-kube-api-access-psdm7\") pod \"904659df-6acf-4547-bbab-088dc2ab7040\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.645309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-catalog-content\") pod \"904659df-6acf-4547-bbab-088dc2ab7040\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.645515 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-utilities\") pod \"904659df-6acf-4547-bbab-088dc2ab7040\" (UID: \"904659df-6acf-4547-bbab-088dc2ab7040\") " Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.646182 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-utilities" (OuterVolumeSpecName: "utilities") pod "904659df-6acf-4547-bbab-088dc2ab7040" (UID: "904659df-6acf-4547-bbab-088dc2ab7040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.654836 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904659df-6acf-4547-bbab-088dc2ab7040-kube-api-access-psdm7" (OuterVolumeSpecName: "kube-api-access-psdm7") pod "904659df-6acf-4547-bbab-088dc2ab7040" (UID: "904659df-6acf-4547-bbab-088dc2ab7040"). InnerVolumeSpecName "kube-api-access-psdm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.662217 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "904659df-6acf-4547-bbab-088dc2ab7040" (UID: "904659df-6acf-4547-bbab-088dc2ab7040"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.747332 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.747378 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psdm7\" (UniqueName: \"kubernetes.io/projected/904659df-6acf-4547-bbab-088dc2ab7040-kube-api-access-psdm7\") on node \"crc\" DevicePath \"\"" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.747392 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904659df-6acf-4547-bbab-088dc2ab7040-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.907657 4688 generic.go:334] "Generic (PLEG): container finished" podID="904659df-6acf-4547-bbab-088dc2ab7040" containerID="d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606" exitCode=0 Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.907984 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk28r" event={"ID":"904659df-6acf-4547-bbab-088dc2ab7040","Type":"ContainerDied","Data":"d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606"} Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.908023 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk28r" event={"ID":"904659df-6acf-4547-bbab-088dc2ab7040","Type":"ContainerDied","Data":"8517cdc74dc67bbf709e4f346250f7bcabfbce5979d480b854e790c2d3e981fe"} Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.908050 4688 scope.go:117] "RemoveContainer" containerID="d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.908311 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk28r" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.943166 4688 scope.go:117] "RemoveContainer" containerID="286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9" Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.960117 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk28r"] Sep 30 18:24:14 crc kubenswrapper[4688]: I0930 18:24:14.973771 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk28r"] Sep 30 18:24:15 crc kubenswrapper[4688]: I0930 18:24:15.004730 4688 scope.go:117] "RemoveContainer" containerID="a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded" Sep 30 18:24:15 crc kubenswrapper[4688]: I0930 18:24:15.030949 4688 scope.go:117] "RemoveContainer" containerID="d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606" Sep 30 18:24:15 crc kubenswrapper[4688]: E0930 18:24:15.034203 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606\": container with ID starting with d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606 not found: ID does not exist" containerID="d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606" Sep 30 18:24:15 crc kubenswrapper[4688]: I0930 18:24:15.034250 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606"} err="failed to get container status \"d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606\": rpc error: code = NotFound desc = could not find container \"d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606\": container with ID starting with d47a086070bf7ef2582d5dd9927862dc9de97140627306359dcc88e685215606 not found: ID does not exist" Sep 30 18:24:15 crc kubenswrapper[4688]: I0930 18:24:15.034278 4688 scope.go:117] "RemoveContainer" containerID="286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9" Sep 30 18:24:15 crc kubenswrapper[4688]: E0930 18:24:15.038158 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9\": container with ID starting with 286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9 not found: ID does not exist" containerID="286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9" Sep 30 18:24:15 crc kubenswrapper[4688]: I0930 18:24:15.038207 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9"} err="failed to get container status \"286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9\": rpc error: code = NotFound desc = could not find container \"286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9\": container with ID starting with 286cd9bae9aed91fae7e172245344737ea52e81fd79aa4d98a05072d11360bc9 not found: ID does not exist" Sep 30 18:24:15 crc kubenswrapper[4688]: I0930 18:24:15.038235 4688 scope.go:117] "RemoveContainer" containerID="a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded" Sep 30 18:24:15 crc kubenswrapper[4688]: E0930 18:24:15.041900 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded\": container with ID starting with a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded not found: ID does not exist" containerID="a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded" Sep 30 18:24:15 crc kubenswrapper[4688]: I0930 18:24:15.041948 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded"} err="failed to get container status \"a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded\": rpc error: code = NotFound desc = could not find container \"a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded\": container with ID starting with a0fc9ec91738448cfe3d95c6d08038c04465b398d866fc69340562ccc62e1ded not found: ID does not exist" Sep 30 18:24:15 crc kubenswrapper[4688]: I0930 18:24:15.444477 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="904659df-6acf-4547-bbab-088dc2ab7040" path="/var/lib/kubelet/pods/904659df-6acf-4547-bbab-088dc2ab7040/volumes" Sep 30 18:25:52 crc kubenswrapper[4688]: I0930 18:25:52.546261 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:25:52 crc kubenswrapper[4688]: I0930 18:25:52.546898 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:26:22 crc kubenswrapper[4688]: I0930 18:26:22.545945 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:26:22 crc kubenswrapper[4688]: I0930 18:26:22.547762 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:26:52 crc kubenswrapper[4688]: I0930 18:26:52.545781 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:26:52 crc kubenswrapper[4688]: I0930 18:26:52.546421 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:26:52 crc kubenswrapper[4688]: I0930 18:26:52.546481 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 18:26:52 crc kubenswrapper[4688]: I0930 18:26:52.547426 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 18:26:52 crc kubenswrapper[4688]: I0930 18:26:52.547497 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" gracePeriod=600 Sep 30 18:26:52 crc kubenswrapper[4688]: E0930 18:26:52.674040 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:26:53 crc kubenswrapper[4688]: I0930 18:26:53.447154 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" exitCode=0 Sep 30 18:26:53 crc kubenswrapper[4688]: I0930 18:26:53.447215 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621"} Sep 30 18:26:53 crc kubenswrapper[4688]: I0930 18:26:53.447480 4688 scope.go:117] "RemoveContainer" containerID="2c7e7ddac02b326559d82d8d741afa76a2cd1af2723d410aefea8840d554eed1" Sep 30 18:26:53 crc kubenswrapper[4688]: I0930 18:26:53.447984 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:26:53 crc kubenswrapper[4688]: E0930 18:26:53.448286 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.129026 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9gf5l"] Sep 30 18:27:03 crc kubenswrapper[4688]: E0930 18:27:03.130216 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904659df-6acf-4547-bbab-088dc2ab7040" containerName="extract-content" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.130231 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="904659df-6acf-4547-bbab-088dc2ab7040" containerName="extract-content" Sep 30 18:27:03 crc kubenswrapper[4688]: E0930 18:27:03.130256 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="904659df-6acf-4547-bbab-088dc2ab7040" containerName="extract-utilities" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.130264 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="904659df-6acf-4547-bbab-088dc2ab7040" containerName="extract-utilities" Sep 30 18:27:03 crc kubenswrapper[4688]: E0930 18:27:03.130286 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904659df-6acf-4547-bbab-088dc2ab7040" containerName="registry-server" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.130292 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="904659df-6acf-4547-bbab-088dc2ab7040" containerName="registry-server" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.130488 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="904659df-6acf-4547-bbab-088dc2ab7040" containerName="registry-server" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.132369 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.146294 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gf5l"] Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.242078 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfrrf\" (UniqueName: \"kubernetes.io/projected/3bfa954b-4088-4b68-adf7-f05004bbec41-kube-api-access-xfrrf\") pod \"community-operators-9gf5l\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.242480 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-catalog-content\") pod \"community-operators-9gf5l\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.242626 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-utilities\") pod \"community-operators-9gf5l\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.345244 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-utilities\") pod \"community-operators-9gf5l\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.345406 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfrrf\" (UniqueName: \"kubernetes.io/projected/3bfa954b-4088-4b68-adf7-f05004bbec41-kube-api-access-xfrrf\") pod \"community-operators-9gf5l\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.345483 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-catalog-content\") pod 
\"community-operators-9gf5l\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.345913 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-utilities\") pod \"community-operators-9gf5l\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.345954 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-catalog-content\") pod \"community-operators-9gf5l\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.387559 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfrrf\" (UniqueName: \"kubernetes.io/projected/3bfa954b-4088-4b68-adf7-f05004bbec41-kube-api-access-xfrrf\") pod \"community-operators-9gf5l\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:03 crc kubenswrapper[4688]: I0930 18:27:03.470529 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:04 crc kubenswrapper[4688]: I0930 18:27:04.059183 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gf5l"] Sep 30 18:27:04 crc kubenswrapper[4688]: I0930 18:27:04.429623 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:27:04 crc kubenswrapper[4688]: E0930 18:27:04.430172 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:27:04 crc kubenswrapper[4688]: I0930 18:27:04.560889 4688 generic.go:334] "Generic (PLEG): container finished" podID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerID="ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740" exitCode=0 Sep 30 18:27:04 crc kubenswrapper[4688]: I0930 18:27:04.560959 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gf5l" event={"ID":"3bfa954b-4088-4b68-adf7-f05004bbec41","Type":"ContainerDied","Data":"ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740"} Sep 30 18:27:04 crc kubenswrapper[4688]: I0930 18:27:04.561007 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gf5l" event={"ID":"3bfa954b-4088-4b68-adf7-f05004bbec41","Type":"ContainerStarted","Data":"fc6da5f3c920a61589751d8bb48cd2e04a6cee7a530a309f52967e62e1b78aac"} Sep 30 18:27:05 crc kubenswrapper[4688]: I0930 18:27:05.570488 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gf5l" 
event={"ID":"3bfa954b-4088-4b68-adf7-f05004bbec41","Type":"ContainerStarted","Data":"6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025"} Sep 30 18:27:06 crc kubenswrapper[4688]: I0930 18:27:06.580441 4688 generic.go:334] "Generic (PLEG): container finished" podID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerID="6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025" exitCode=0 Sep 30 18:27:06 crc kubenswrapper[4688]: I0930 18:27:06.580544 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gf5l" event={"ID":"3bfa954b-4088-4b68-adf7-f05004bbec41","Type":"ContainerDied","Data":"6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025"} Sep 30 18:27:07 crc kubenswrapper[4688]: I0930 18:27:07.591474 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gf5l" event={"ID":"3bfa954b-4088-4b68-adf7-f05004bbec41","Type":"ContainerStarted","Data":"66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387"} Sep 30 18:27:07 crc kubenswrapper[4688]: I0930 18:27:07.615072 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9gf5l" podStartSLOduration=2.166481305 podStartE2EDuration="4.61505134s" podCreationTimestamp="2025-09-30 18:27:03 +0000 UTC" firstStartedPulling="2025-09-30 18:27:04.56245073 +0000 UTC m=+4281.857888288" lastFinishedPulling="2025-09-30 18:27:07.011020765 +0000 UTC m=+4284.306458323" observedRunningTime="2025-09-30 18:27:07.608404328 +0000 UTC m=+4284.903841896" watchObservedRunningTime="2025-09-30 18:27:07.61505134 +0000 UTC m=+4284.910488898" Sep 30 18:27:13 crc kubenswrapper[4688]: I0930 18:27:13.471402 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:13 crc kubenswrapper[4688]: I0930 18:27:13.472011 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:13 crc kubenswrapper[4688]: I0930 18:27:13.536464 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:13 crc kubenswrapper[4688]: I0930 18:27:13.694906 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:13 crc kubenswrapper[4688]: I0930 18:27:13.770596 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gf5l"] Sep 30 18:27:15 crc kubenswrapper[4688]: I0930 18:27:15.430283 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:27:15 crc kubenswrapper[4688]: E0930 18:27:15.431271 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:27:15 crc kubenswrapper[4688]: I0930 18:27:15.667979 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9gf5l" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" 
containerName="registry-server" containerID="cri-o://66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387" gracePeriod=2 Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.328849 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.335711 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-catalog-content\") pod \"3bfa954b-4088-4b68-adf7-f05004bbec41\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.335879 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-utilities\") pod \"3bfa954b-4088-4b68-adf7-f05004bbec41\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.335968 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfrrf\" (UniqueName: \"kubernetes.io/projected/3bfa954b-4088-4b68-adf7-f05004bbec41-kube-api-access-xfrrf\") pod \"3bfa954b-4088-4b68-adf7-f05004bbec41\" (UID: \"3bfa954b-4088-4b68-adf7-f05004bbec41\") " Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.337339 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-utilities" (OuterVolumeSpecName: "utilities") pod "3bfa954b-4088-4b68-adf7-f05004bbec41" (UID: "3bfa954b-4088-4b68-adf7-f05004bbec41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.345045 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bfa954b-4088-4b68-adf7-f05004bbec41-kube-api-access-xfrrf" (OuterVolumeSpecName: "kube-api-access-xfrrf") pod "3bfa954b-4088-4b68-adf7-f05004bbec41" (UID: "3bfa954b-4088-4b68-adf7-f05004bbec41"). InnerVolumeSpecName "kube-api-access-xfrrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.407218 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bfa954b-4088-4b68-adf7-f05004bbec41" (UID: "3bfa954b-4088-4b68-adf7-f05004bbec41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.438304 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.438342 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfrrf\" (UniqueName: \"kubernetes.io/projected/3bfa954b-4088-4b68-adf7-f05004bbec41-kube-api-access-xfrrf\") on node \"crc\" DevicePath \"\"" Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.438357 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bfa954b-4088-4b68-adf7-f05004bbec41-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.680536 4688 generic.go:334] "Generic (PLEG): container finished" podID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerID="66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387" exitCode=0 Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.680660 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gf5l" Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.680731 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gf5l" event={"ID":"3bfa954b-4088-4b68-adf7-f05004bbec41","Type":"ContainerDied","Data":"66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387"} Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.680797 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gf5l" event={"ID":"3bfa954b-4088-4b68-adf7-f05004bbec41","Type":"ContainerDied","Data":"fc6da5f3c920a61589751d8bb48cd2e04a6cee7a530a309f52967e62e1b78aac"} Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.680835 4688 scope.go:117] "RemoveContainer" containerID="66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387" Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.720101 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gf5l"] Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.728868 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9gf5l"] Sep 30 18:27:16 crc kubenswrapper[4688]: I0930 18:27:16.741854 4688 scope.go:117] "RemoveContainer" containerID="6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025" Sep 30 18:27:17 crc kubenswrapper[4688]: I0930 18:27:17.201625 4688 scope.go:117] "RemoveContainer" containerID="ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740" Sep 30 18:27:17 crc kubenswrapper[4688]: I0930 18:27:17.279293 4688 scope.go:117] "RemoveContainer" containerID="66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387" Sep 30 18:27:17 crc kubenswrapper[4688]: E0930 18:27:17.279767 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387\": container with ID starting with 66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387 not found: ID does not exist" containerID="66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387" Sep 30 18:27:17 crc kubenswrapper[4688]: I0930 18:27:17.279812 
4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387"} err="failed to get container status \"66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387\": rpc error: code = NotFound desc = could not find container \"66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387\": container with ID starting with 66b9e65a90c405e9803f202809b9060bd3c55f7d238e80ee9c5667af127eb387 not found: ID does not exist" Sep 30 18:27:17 crc kubenswrapper[4688]: I0930 18:27:17.279928 4688 scope.go:117] "RemoveContainer" containerID="6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025" Sep 30 18:27:17 crc kubenswrapper[4688]: E0930 18:27:17.280233 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025\": container with ID starting with 6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025 not found: ID does not exist" containerID="6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025" Sep 30 18:27:17 crc kubenswrapper[4688]: I0930 18:27:17.280288 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025"} err="failed to get container status \"6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025\": rpc error: code = NotFound desc = could not find container \"6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025\": container with ID starting with 6118d9b9e74c0a36d01ba2d53099b4d7c032605197f6acad0246f8bfb946a025 not found: ID does not exist" Sep 30 18:27:17 crc kubenswrapper[4688]: I0930 18:27:17.280310 4688 scope.go:117] "RemoveContainer" containerID="ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740" Sep 30 18:27:17 crc kubenswrapper[4688]: E0930 18:27:17.280879 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740\": container with ID starting with ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740 not found: ID does not exist" containerID="ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740" Sep 30 18:27:17 crc kubenswrapper[4688]: I0930 18:27:17.280910 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740"} err="failed to get container status \"ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740\": rpc error: code = NotFound desc = could not find container \"ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740\": container with ID starting with ec0f878353ce099b62e9cc11117bbd4f5baefcf1a93784ba35de4f8bc644c740 not found: ID does not exist" Sep 30 18:27:17 crc kubenswrapper[4688]: I0930 18:27:17.440715 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" path="/var/lib/kubelet/pods/3bfa954b-4088-4b68-adf7-f05004bbec41/volumes" Sep 30 18:27:30 crc kubenswrapper[4688]: I0930 18:27:30.429691 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:27:30 crc kubenswrapper[4688]: E0930 18:27:30.430941 4688 pod_workers.go:1301] "Error syncing pod, skipping" 
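Both catalog pods in this excerpt follow the same three-stage birth: extract-utilities and extract-content each run to completion (ContainerDied with exitCode=0) before registry-server ever starts. That strict ordering is what init containers guarantee, and the extract-* containers in OLM catalog pods are, to my understanding, init containers (an assumption; the log shows only the ordering). The startup flow reduces to:

```go
package main

import "fmt"

// runToCompletion stands in for starting an init container and waiting for
// it to exit 0; a non-zero exit would block everything after it.
func runToCompletion(name string) error {
	fmt.Println("init container finished:", name) // "ContainerDied" + exitCode=0
	return nil
}

func main() {
	for _, c := range []string{"extract-utilities", "extract-content"} {
		if err := runToCompletion(c); err != nil {
			return
		}
	}
	fmt.Println("starting container: registry-server") // only now does the main container run
}
```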
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:27:44 crc kubenswrapper[4688]: I0930 18:27:44.429365 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:27:44 crc kubenswrapper[4688]: E0930 18:27:44.430497 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:27:55 crc kubenswrapper[4688]: I0930 18:27:55.430145 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:27:55 crc kubenswrapper[4688]: E0930 18:27:55.432017 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:28:08 crc kubenswrapper[4688]: I0930 18:28:08.429749 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:28:08 crc kubenswrapper[4688]: E0930 18:28:08.430440 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:28:20 crc kubenswrapper[4688]: I0930 18:28:20.430319 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:28:20 crc kubenswrapper[4688]: E0930 18:28:20.431046 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:28:32 crc kubenswrapper[4688]: I0930 18:28:32.429951 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:28:32 crc kubenswrapper[4688]: E0930 18:28:32.430791 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:28:44 crc kubenswrapper[4688]: I0930 18:28:44.428911 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:28:44 crc kubenswrapper[4688]: E0930 18:28:44.429732 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:28:55 crc kubenswrapper[4688]: I0930 18:28:55.430000 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:28:55 crc kubenswrapper[4688]: E0930 18:28:55.430901 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:29:09 crc kubenswrapper[4688]: I0930 18:29:09.429996 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:29:09 crc kubenswrapper[4688]: E0930 18:29:09.430837 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:29:24 crc kubenswrapper[4688]: I0930 18:29:24.429394 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:29:24 crc kubenswrapper[4688]: E0930 18:29:24.430327 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:29:39 crc kubenswrapper[4688]: I0930 18:29:39.430325 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:29:39 crc kubenswrapper[4688]: E0930 18:29:39.431405 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" 
podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:29:50 crc kubenswrapper[4688]: I0930 18:29:50.430752 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:29:50 crc kubenswrapper[4688]: E0930 18:29:50.432181 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.165639 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"] Sep 30 18:30:00 crc kubenswrapper[4688]: E0930 18:30:00.166797 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="extract-utilities" Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.166818 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="extract-utilities" Sep 30 18:30:00 crc kubenswrapper[4688]: E0930 18:30:00.166842 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="extract-content" Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.166854 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="extract-content" Sep 30 18:30:00 crc kubenswrapper[4688]: E0930 18:30:00.166903 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="registry-server" Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.166913 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="registry-server" Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.167204 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="registry-server" Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.168066 4688 util.go:30] "No sandbox for pod can be found. 
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.165639 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"]
Sep 30 18:30:00 crc kubenswrapper[4688]: E0930 18:30:00.166797 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="extract-utilities"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.166818 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="extract-utilities"
Sep 30 18:30:00 crc kubenswrapper[4688]: E0930 18:30:00.166842 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="extract-content"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.166854 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="extract-content"
Sep 30 18:30:00 crc kubenswrapper[4688]: E0930 18:30:00.166903 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="registry-server"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.166913 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="registry-server"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.167204 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bfa954b-4088-4b68-adf7-f05004bbec41" containerName="registry-server"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.168066 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.170308 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.170406 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.185789 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"]
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.259318 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/969244cd-157b-4d9d-9aab-9d8a15f4a978-secret-volume\") pod \"collect-profiles-29320950-jr9tj\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.259375 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/969244cd-157b-4d9d-9aab-9d8a15f4a978-config-volume\") pod \"collect-profiles-29320950-jr9tj\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.259416 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbr5\" (UniqueName: \"kubernetes.io/projected/969244cd-157b-4d9d-9aab-9d8a15f4a978-kube-api-access-bnbr5\") pod \"collect-profiles-29320950-jr9tj\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.361624 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/969244cd-157b-4d9d-9aab-9d8a15f4a978-secret-volume\") pod \"collect-profiles-29320950-jr9tj\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.361691 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/969244cd-157b-4d9d-9aab-9d8a15f4a978-config-volume\") pod \"collect-profiles-29320950-jr9tj\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.361729 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnbr5\" (UniqueName: \"kubernetes.io/projected/969244cd-157b-4d9d-9aab-9d8a15f4a978-kube-api-access-bnbr5\") pod \"collect-profiles-29320950-jr9tj\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.362717 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/969244cd-157b-4d9d-9aab-9d8a15f4a978-config-volume\") pod \"collect-profiles-29320950-jr9tj\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.368375 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/969244cd-157b-4d9d-9aab-9d8a15f4a978-secret-volume\") pod \"collect-profiles-29320950-jr9tj\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.379256 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnbr5\" (UniqueName: \"kubernetes.io/projected/969244cd-157b-4d9d-9aab-9d8a15f4a978-kube-api-access-bnbr5\") pod \"collect-profiles-29320950-jr9tj\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.494543 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Sep 30 18:30:00 crc kubenswrapper[4688]: I0930 18:30:00.960461 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"]
Sep 30 18:30:01 crc kubenswrapper[4688]: I0930 18:30:01.309720 4688 generic.go:334] "Generic (PLEG): container finished" podID="969244cd-157b-4d9d-9aab-9d8a15f4a978" containerID="5d0bc479de96fa43cb5ce44a33d024fbe3429c2b19237ebe3580e163ec1c523b" exitCode=0
Sep 30 18:30:01 crc kubenswrapper[4688]: I0930 18:30:01.309763 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj" event={"ID":"969244cd-157b-4d9d-9aab-9d8a15f4a978","Type":"ContainerDied","Data":"5d0bc479de96fa43cb5ce44a33d024fbe3429c2b19237ebe3580e163ec1c523b"}
Sep 30 18:30:01 crc kubenswrapper[4688]: I0930 18:30:01.309788 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj" event={"ID":"969244cd-157b-4d9d-9aab-9d8a15f4a978","Type":"ContainerStarted","Data":"a9b5d0791c8ade4352090003f4957b27213271b61e19cabf2f557dcb904172e1"}
Sep 30 18:30:01 crc kubenswrapper[4688]: I0930 18:30:01.430190 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621"
Sep 30 18:30:01 crc kubenswrapper[4688]: E0930 18:30:01.430456 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b"
Sep 30 18:30:02 crc kubenswrapper[4688]: I0930 18:30:02.752969 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj" Sep 30 18:30:02 crc kubenswrapper[4688]: I0930 18:30:02.912190 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/969244cd-157b-4d9d-9aab-9d8a15f4a978-config-volume\") pod \"969244cd-157b-4d9d-9aab-9d8a15f4a978\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " Sep 30 18:30:02 crc kubenswrapper[4688]: I0930 18:30:02.912312 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/969244cd-157b-4d9d-9aab-9d8a15f4a978-secret-volume\") pod \"969244cd-157b-4d9d-9aab-9d8a15f4a978\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " Sep 30 18:30:02 crc kubenswrapper[4688]: I0930 18:30:02.912352 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnbr5\" (UniqueName: \"kubernetes.io/projected/969244cd-157b-4d9d-9aab-9d8a15f4a978-kube-api-access-bnbr5\") pod \"969244cd-157b-4d9d-9aab-9d8a15f4a978\" (UID: \"969244cd-157b-4d9d-9aab-9d8a15f4a978\") " Sep 30 18:30:02 crc kubenswrapper[4688]: I0930 18:30:02.912946 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/969244cd-157b-4d9d-9aab-9d8a15f4a978-config-volume" (OuterVolumeSpecName: "config-volume") pod "969244cd-157b-4d9d-9aab-9d8a15f4a978" (UID: "969244cd-157b-4d9d-9aab-9d8a15f4a978"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 18:30:02 crc kubenswrapper[4688]: I0930 18:30:02.913850 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/969244cd-157b-4d9d-9aab-9d8a15f4a978-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 18:30:02 crc kubenswrapper[4688]: I0930 18:30:02.919018 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/969244cd-157b-4d9d-9aab-9d8a15f4a978-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "969244cd-157b-4d9d-9aab-9d8a15f4a978" (UID: "969244cd-157b-4d9d-9aab-9d8a15f4a978"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:30:02 crc kubenswrapper[4688]: I0930 18:30:02.919956 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/969244cd-157b-4d9d-9aab-9d8a15f4a978-kube-api-access-bnbr5" (OuterVolumeSpecName: "kube-api-access-bnbr5") pod "969244cd-157b-4d9d-9aab-9d8a15f4a978" (UID: "969244cd-157b-4d9d-9aab-9d8a15f4a978"). InnerVolumeSpecName "kube-api-access-bnbr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:30:03 crc kubenswrapper[4688]: I0930 18:30:03.015599 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/969244cd-157b-4d9d-9aab-9d8a15f4a978-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 18:30:03 crc kubenswrapper[4688]: I0930 18:30:03.015642 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnbr5\" (UniqueName: \"kubernetes.io/projected/969244cd-157b-4d9d-9aab-9d8a15f4a978-kube-api-access-bnbr5\") on node \"crc\" DevicePath \"\"" Sep 30 18:30:03 crc kubenswrapper[4688]: I0930 18:30:03.329982 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj" event={"ID":"969244cd-157b-4d9d-9aab-9d8a15f4a978","Type":"ContainerDied","Data":"a9b5d0791c8ade4352090003f4957b27213271b61e19cabf2f557dcb904172e1"} Sep 30 18:30:03 crc kubenswrapper[4688]: I0930 18:30:03.330022 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b5d0791c8ade4352090003f4957b27213271b61e19cabf2f557dcb904172e1" Sep 30 18:30:03 crc kubenswrapper[4688]: I0930 18:30:03.330113 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320950-jr9tj" Sep 30 18:30:03 crc kubenswrapper[4688]: I0930 18:30:03.833035 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6"] Sep 30 18:30:03 crc kubenswrapper[4688]: I0930 18:30:03.841914 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320905-rwht6"] Sep 30 18:30:05 crc kubenswrapper[4688]: I0930 18:30:05.447017 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39961bba-e5a0-4319-b834-b40db142e8a8" path="/var/lib/kubelet/pods/39961bba-e5a0-4319-b834-b40db142e8a8/volumes" Sep 30 18:30:13 crc kubenswrapper[4688]: I0930 18:30:13.429367 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:30:13 crc kubenswrapper[4688]: E0930 18:30:13.430178 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:30:27 crc kubenswrapper[4688]: I0930 18:30:27.431208 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:30:27 crc kubenswrapper[4688]: E0930 18:30:27.432287 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:30:39 crc kubenswrapper[4688]: I0930 18:30:39.428983 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 
18:30:39 crc kubenswrapper[4688]: E0930 18:30:39.429728 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:30:49 crc kubenswrapper[4688]: I0930 18:30:49.789102 4688 scope.go:117] "RemoveContainer" containerID="b497d6bb8c0467de0b589534e58ec72ecc942f873b161d0a796fc1f7fc71b3d1" Sep 30 18:30:54 crc kubenswrapper[4688]: I0930 18:30:54.429921 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:30:54 crc kubenswrapper[4688]: E0930 18:30:54.431507 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:31:08 crc kubenswrapper[4688]: I0930 18:31:08.429791 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:31:08 crc kubenswrapper[4688]: E0930 18:31:08.430706 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:31:21 crc kubenswrapper[4688]: I0930 18:31:21.429981 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:31:21 crc kubenswrapper[4688]: E0930 18:31:21.430787 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:31:33 crc kubenswrapper[4688]: I0930 18:31:33.436052 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:31:33 crc kubenswrapper[4688]: E0930 18:31:33.437008 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:31:47 crc kubenswrapper[4688]: I0930 18:31:47.430034 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:31:47 crc 
kubenswrapper[4688]: E0930 18:31:47.430940 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:32:02 crc kubenswrapper[4688]: I0930 18:32:02.429198 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621" Sep 30 18:32:03 crc kubenswrapper[4688]: I0930 18:32:03.489198 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"7aad3fc957476370a2c06af6d3ff879f77e3fc820edcdb8ef71c6c7b187d60d2"} Sep 30 18:33:37 crc kubenswrapper[4688]: I0930 18:33:37.863683 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7xp4d"] Sep 30 18:33:37 crc kubenswrapper[4688]: E0930 18:33:37.864907 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="969244cd-157b-4d9d-9aab-9d8a15f4a978" containerName="collect-profiles" Sep 30 18:33:37 crc kubenswrapper[4688]: I0930 18:33:37.864926 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="969244cd-157b-4d9d-9aab-9d8a15f4a978" containerName="collect-profiles" Sep 30 18:33:37 crc kubenswrapper[4688]: I0930 18:33:37.865099 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="969244cd-157b-4d9d-9aab-9d8a15f4a978" containerName="collect-profiles" Sep 30 18:33:37 crc kubenswrapper[4688]: I0930 18:33:37.866441 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:37 crc kubenswrapper[4688]: I0930 18:33:37.877302 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xp4d"] Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.002939 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-utilities\") pod \"certified-operators-7xp4d\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.003075 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47f59\" (UniqueName: \"kubernetes.io/projected/e6be204c-f0b5-4585-9df6-3ce954a29110-kube-api-access-47f59\") pod \"certified-operators-7xp4d\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.003176 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-catalog-content\") pod \"certified-operators-7xp4d\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.104726 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-utilities\") pod \"certified-operators-7xp4d\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.105039 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47f59\" (UniqueName: \"kubernetes.io/projected/e6be204c-f0b5-4585-9df6-3ce954a29110-kube-api-access-47f59\") pod \"certified-operators-7xp4d\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.105186 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-catalog-content\") pod \"certified-operators-7xp4d\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.105692 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-utilities\") pod \"certified-operators-7xp4d\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.105829 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-catalog-content\") pod \"certified-operators-7xp4d\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.137702 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-47f59\" (UniqueName: \"kubernetes.io/projected/e6be204c-f0b5-4585-9df6-3ce954a29110-kube-api-access-47f59\") pod \"certified-operators-7xp4d\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.205910 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:38 crc kubenswrapper[4688]: I0930 18:33:38.739256 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xp4d"] Sep 30 18:33:38 crc kubenswrapper[4688]: W0930 18:33:38.742943 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6be204c_f0b5_4585_9df6_3ce954a29110.slice/crio-d14c88edf33808a6a24d788eebe3779a54b29aaedf0f2431f696cf52dc4b4677 WatchSource:0}: Error finding container d14c88edf33808a6a24d788eebe3779a54b29aaedf0f2431f696cf52dc4b4677: Status 404 returned error can't find the container with id d14c88edf33808a6a24d788eebe3779a54b29aaedf0f2431f696cf52dc4b4677 Sep 30 18:33:39 crc kubenswrapper[4688]: I0930 18:33:39.521406 4688 generic.go:334] "Generic (PLEG): container finished" podID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerID="21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d" exitCode=0 Sep 30 18:33:39 crc kubenswrapper[4688]: I0930 18:33:39.521513 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xp4d" event={"ID":"e6be204c-f0b5-4585-9df6-3ce954a29110","Type":"ContainerDied","Data":"21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d"} Sep 30 18:33:39 crc kubenswrapper[4688]: I0930 18:33:39.521802 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xp4d" event={"ID":"e6be204c-f0b5-4585-9df6-3ce954a29110","Type":"ContainerStarted","Data":"d14c88edf33808a6a24d788eebe3779a54b29aaedf0f2431f696cf52dc4b4677"} Sep 30 18:33:39 crc kubenswrapper[4688]: I0930 18:33:39.523977 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 18:33:40 crc kubenswrapper[4688]: I0930 18:33:40.538127 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xp4d" event={"ID":"e6be204c-f0b5-4585-9df6-3ce954a29110","Type":"ContainerStarted","Data":"78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df"} Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.453679 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bcw2f"] Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.459307 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.462506 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcw2f"] Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.552584 4688 generic.go:334] "Generic (PLEG): container finished" podID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerID="78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df" exitCode=0 Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.552625 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xp4d" event={"ID":"e6be204c-f0b5-4585-9df6-3ce954a29110","Type":"ContainerDied","Data":"78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df"} Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.572886 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbghn\" (UniqueName: \"kubernetes.io/projected/761d8ac0-4aa6-4699-afaf-0d26f0529f07-kube-api-access-gbghn\") pod \"redhat-operators-bcw2f\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.573162 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-catalog-content\") pod \"redhat-operators-bcw2f\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.573315 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-utilities\") pod \"redhat-operators-bcw2f\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.674852 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-catalog-content\") pod \"redhat-operators-bcw2f\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.674930 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-utilities\") pod \"redhat-operators-bcw2f\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.675030 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbghn\" (UniqueName: \"kubernetes.io/projected/761d8ac0-4aa6-4699-afaf-0d26f0529f07-kube-api-access-gbghn\") pod \"redhat-operators-bcw2f\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.675784 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-catalog-content\") pod \"redhat-operators-bcw2f\" (UID: 
\"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.676023 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-utilities\") pod \"redhat-operators-bcw2f\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.692819 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbghn\" (UniqueName: \"kubernetes.io/projected/761d8ac0-4aa6-4699-afaf-0d26f0529f07-kube-api-access-gbghn\") pod \"redhat-operators-bcw2f\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:41 crc kubenswrapper[4688]: I0930 18:33:41.795046 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:42 crc kubenswrapper[4688]: I0930 18:33:42.284496 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcw2f"] Sep 30 18:33:42 crc kubenswrapper[4688]: I0930 18:33:42.562952 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xp4d" event={"ID":"e6be204c-f0b5-4585-9df6-3ce954a29110","Type":"ContainerStarted","Data":"7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f"} Sep 30 18:33:42 crc kubenswrapper[4688]: I0930 18:33:42.565948 4688 generic.go:334] "Generic (PLEG): container finished" podID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerID="bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b" exitCode=0 Sep 30 18:33:42 crc kubenswrapper[4688]: I0930 18:33:42.566047 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcw2f" event={"ID":"761d8ac0-4aa6-4699-afaf-0d26f0529f07","Type":"ContainerDied","Data":"bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b"} Sep 30 18:33:42 crc kubenswrapper[4688]: I0930 18:33:42.566116 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcw2f" event={"ID":"761d8ac0-4aa6-4699-afaf-0d26f0529f07","Type":"ContainerStarted","Data":"465a909a582ebcec2f8210a47c52f79b4a1cf787195b64b623ce0ec9d896f326"} Sep 30 18:33:42 crc kubenswrapper[4688]: I0930 18:33:42.590614 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7xp4d" podStartSLOduration=3.022197169 podStartE2EDuration="5.590588654s" podCreationTimestamp="2025-09-30 18:33:37 +0000 UTC" firstStartedPulling="2025-09-30 18:33:39.523710803 +0000 UTC m=+4676.819148361" lastFinishedPulling="2025-09-30 18:33:42.092102298 +0000 UTC m=+4679.387539846" observedRunningTime="2025-09-30 18:33:42.586675463 +0000 UTC m=+4679.882113021" watchObservedRunningTime="2025-09-30 18:33:42.590588654 +0000 UTC m=+4679.886026222" Sep 30 18:33:44 crc kubenswrapper[4688]: I0930 18:33:44.589471 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcw2f" event={"ID":"761d8ac0-4aa6-4699-afaf-0d26f0529f07","Type":"ContainerStarted","Data":"0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e"} Sep 30 18:33:45 crc kubenswrapper[4688]: I0930 18:33:45.601891 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerID="0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e" exitCode=0 Sep 30 18:33:45 crc kubenswrapper[4688]: I0930 18:33:45.601981 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcw2f" event={"ID":"761d8ac0-4aa6-4699-afaf-0d26f0529f07","Type":"ContainerDied","Data":"0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e"} Sep 30 18:33:46 crc kubenswrapper[4688]: I0930 18:33:46.614563 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcw2f" event={"ID":"761d8ac0-4aa6-4699-afaf-0d26f0529f07","Type":"ContainerStarted","Data":"065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a"} Sep 30 18:33:46 crc kubenswrapper[4688]: I0930 18:33:46.637078 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bcw2f" podStartSLOduration=2.030452602 podStartE2EDuration="5.637055468s" podCreationTimestamp="2025-09-30 18:33:41 +0000 UTC" firstStartedPulling="2025-09-30 18:33:42.567803374 +0000 UTC m=+4679.863240932" lastFinishedPulling="2025-09-30 18:33:46.17440625 +0000 UTC m=+4683.469843798" observedRunningTime="2025-09-30 18:33:46.632812708 +0000 UTC m=+4683.928250286" watchObservedRunningTime="2025-09-30 18:33:46.637055468 +0000 UTC m=+4683.932493026" Sep 30 18:33:48 crc kubenswrapper[4688]: I0930 18:33:48.207027 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:48 crc kubenswrapper[4688]: I0930 18:33:48.208508 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:48 crc kubenswrapper[4688]: I0930 18:33:48.261334 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:48 crc kubenswrapper[4688]: I0930 18:33:48.685219 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:49 crc kubenswrapper[4688]: I0930 18:33:49.445659 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xp4d"] Sep 30 18:33:50 crc kubenswrapper[4688]: I0930 18:33:50.655099 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7xp4d" podUID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerName="registry-server" containerID="cri-o://7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f" gracePeriod=2 Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.193393 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.276403 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-catalog-content\") pod \"e6be204c-f0b5-4585-9df6-3ce954a29110\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.276649 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47f59\" (UniqueName: \"kubernetes.io/projected/e6be204c-f0b5-4585-9df6-3ce954a29110-kube-api-access-47f59\") pod \"e6be204c-f0b5-4585-9df6-3ce954a29110\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.276730 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-utilities\") pod \"e6be204c-f0b5-4585-9df6-3ce954a29110\" (UID: \"e6be204c-f0b5-4585-9df6-3ce954a29110\") " Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.277265 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-utilities" (OuterVolumeSpecName: "utilities") pod "e6be204c-f0b5-4585-9df6-3ce954a29110" (UID: "e6be204c-f0b5-4585-9df6-3ce954a29110"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.278502 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.282790 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6be204c-f0b5-4585-9df6-3ce954a29110-kube-api-access-47f59" (OuterVolumeSpecName: "kube-api-access-47f59") pod "e6be204c-f0b5-4585-9df6-3ce954a29110" (UID: "e6be204c-f0b5-4585-9df6-3ce954a29110"). InnerVolumeSpecName "kube-api-access-47f59". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.316643 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6be204c-f0b5-4585-9df6-3ce954a29110" (UID: "e6be204c-f0b5-4585-9df6-3ce954a29110"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.381426 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6be204c-f0b5-4585-9df6-3ce954a29110-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.381505 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47f59\" (UniqueName: \"kubernetes.io/projected/e6be204c-f0b5-4585-9df6-3ce954a29110-kube-api-access-47f59\") on node \"crc\" DevicePath \"\"" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.678449 4688 generic.go:334] "Generic (PLEG): container finished" podID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerID="7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f" exitCode=0 Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.678498 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xp4d" event={"ID":"e6be204c-f0b5-4585-9df6-3ce954a29110","Type":"ContainerDied","Data":"7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f"} Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.678539 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xp4d" event={"ID":"e6be204c-f0b5-4585-9df6-3ce954a29110","Type":"ContainerDied","Data":"d14c88edf33808a6a24d788eebe3779a54b29aaedf0f2431f696cf52dc4b4677"} Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.678544 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xp4d" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.678566 4688 scope.go:117] "RemoveContainer" containerID="7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.710259 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7xp4d"] Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.713601 4688 scope.go:117] "RemoveContainer" containerID="78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.724003 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7xp4d"] Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.741296 4688 scope.go:117] "RemoveContainer" containerID="21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.794375 4688 scope.go:117] "RemoveContainer" containerID="7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f" Sep 30 18:33:51 crc kubenswrapper[4688]: E0930 18:33:51.794884 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f\": container with ID starting with 7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f not found: ID does not exist" containerID="7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.794926 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f"} err="failed to get container status 
\"7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f\": rpc error: code = NotFound desc = could not find container \"7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f\": container with ID starting with 7ebc450b3b1f85e3e6d7269cdb170deb05155aa08de664f97e2599e248c8920f not found: ID does not exist" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.794955 4688 scope.go:117] "RemoveContainer" containerID="78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.795138 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.795172 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:51 crc kubenswrapper[4688]: E0930 18:33:51.795382 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df\": container with ID starting with 78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df not found: ID does not exist" containerID="78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.795438 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df"} err="failed to get container status \"78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df\": rpc error: code = NotFound desc = could not find container \"78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df\": container with ID starting with 78dd7272bfc54d081656187af09e1b63bf4f3b05edf804fe587980079231b6df not found: ID does not exist" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.795468 4688 scope.go:117] "RemoveContainer" containerID="21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d" Sep 30 18:33:51 crc kubenswrapper[4688]: E0930 18:33:51.796004 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d\": container with ID starting with 21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d not found: ID does not exist" containerID="21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.796056 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d"} err="failed to get container status \"21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d\": rpc error: code = NotFound desc = could not find container \"21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d\": container with ID starting with 21d79ff8ce3241cc1ff624b77e061e9621a2e028447d9abda7fe28f2d79bb58d not found: ID does not exist" Sep 30 18:33:51 crc kubenswrapper[4688]: I0930 18:33:51.848851 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:52 crc kubenswrapper[4688]: I0930 18:33:52.738998 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bcw2f" Sep 
30 18:33:53 crc kubenswrapper[4688]: I0930 18:33:53.446203 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6be204c-f0b5-4585-9df6-3ce954a29110" path="/var/lib/kubelet/pods/e6be204c-f0b5-4585-9df6-3ce954a29110/volumes" Sep 30 18:33:54 crc kubenswrapper[4688]: I0930 18:33:54.232003 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcw2f"] Sep 30 18:33:54 crc kubenswrapper[4688]: I0930 18:33:54.706596 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bcw2f" podUID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerName="registry-server" containerID="cri-o://065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a" gracePeriod=2 Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.309767 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.482365 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-catalog-content\") pod \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.483630 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-utilities\") pod \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.483717 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbghn\" (UniqueName: \"kubernetes.io/projected/761d8ac0-4aa6-4699-afaf-0d26f0529f07-kube-api-access-gbghn\") pod \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\" (UID: \"761d8ac0-4aa6-4699-afaf-0d26f0529f07\") " Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.485615 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-utilities" (OuterVolumeSpecName: "utilities") pod "761d8ac0-4aa6-4699-afaf-0d26f0529f07" (UID: "761d8ac0-4aa6-4699-afaf-0d26f0529f07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.578866 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "761d8ac0-4aa6-4699-afaf-0d26f0529f07" (UID: "761d8ac0-4aa6-4699-afaf-0d26f0529f07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.586983 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.587029 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761d8ac0-4aa6-4699-afaf-0d26f0529f07-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.716650 4688 generic.go:334] "Generic (PLEG): container finished" podID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerID="065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a" exitCode=0 Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.716704 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcw2f" event={"ID":"761d8ac0-4aa6-4699-afaf-0d26f0529f07","Type":"ContainerDied","Data":"065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a"} Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.716737 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcw2f" event={"ID":"761d8ac0-4aa6-4699-afaf-0d26f0529f07","Type":"ContainerDied","Data":"465a909a582ebcec2f8210a47c52f79b4a1cf787195b64b623ce0ec9d896f326"} Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.716758 4688 scope.go:117] "RemoveContainer" containerID="065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.716942 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcw2f" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.737635 4688 scope.go:117] "RemoveContainer" containerID="0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.786884 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761d8ac0-4aa6-4699-afaf-0d26f0529f07-kube-api-access-gbghn" (OuterVolumeSpecName: "kube-api-access-gbghn") pod "761d8ac0-4aa6-4699-afaf-0d26f0529f07" (UID: "761d8ac0-4aa6-4699-afaf-0d26f0529f07"). InnerVolumeSpecName "kube-api-access-gbghn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.791315 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbghn\" (UniqueName: \"kubernetes.io/projected/761d8ac0-4aa6-4699-afaf-0d26f0529f07-kube-api-access-gbghn\") on node \"crc\" DevicePath \"\"" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.801182 4688 scope.go:117] "RemoveContainer" containerID="bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.880511 4688 scope.go:117] "RemoveContainer" containerID="065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a" Sep 30 18:33:55 crc kubenswrapper[4688]: E0930 18:33:55.880879 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a\": container with ID starting with 065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a not found: ID does not exist" containerID="065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.880916 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a"} err="failed to get container status \"065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a\": rpc error: code = NotFound desc = could not find container \"065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a\": container with ID starting with 065a74eb143649bb0357f5f3a0e2b52232a35933d18d74a6f3c0b0a7602d752a not found: ID does not exist" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.880943 4688 scope.go:117] "RemoveContainer" containerID="0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e" Sep 30 18:33:55 crc kubenswrapper[4688]: E0930 18:33:55.881179 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e\": container with ID starting with 0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e not found: ID does not exist" containerID="0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.881208 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e"} err="failed to get container status \"0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e\": rpc error: code = NotFound desc = could not find container \"0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e\": container with ID starting with 0282626d73e1a8e2b5229666c5629221367d7bb28785ae33841637438defd34e not found: ID does not exist" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.881224 4688 scope.go:117] "RemoveContainer" containerID="bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b" Sep 30 18:33:55 crc kubenswrapper[4688]: E0930 18:33:55.881425 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b\": container with ID starting with bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b not found: ID does not 
exist" containerID="bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b" Sep 30 18:33:55 crc kubenswrapper[4688]: I0930 18:33:55.881452 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b"} err="failed to get container status \"bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b\": rpc error: code = NotFound desc = could not find container \"bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b\": container with ID starting with bdc159b70fe9cca485e5c75b32696c5ed10600d4d2d12d7bbbb271a4d423089b not found: ID does not exist" Sep 30 18:33:56 crc kubenswrapper[4688]: I0930 18:33:56.072435 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcw2f"] Sep 30 18:33:56 crc kubenswrapper[4688]: I0930 18:33:56.084773 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bcw2f"] Sep 30 18:33:57 crc kubenswrapper[4688]: I0930 18:33:57.441434 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" path="/var/lib/kubelet/pods/761d8ac0-4aa6-4699-afaf-0d26f0529f07/volumes" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.004348 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-55h5c"] Sep 30 18:34:21 crc kubenswrapper[4688]: E0930 18:34:21.005414 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerName="registry-server" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.005432 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerName="registry-server" Sep 30 18:34:21 crc kubenswrapper[4688]: E0930 18:34:21.005448 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerName="extract-content" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.005457 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerName="extract-content" Sep 30 18:34:21 crc kubenswrapper[4688]: E0930 18:34:21.005486 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerName="extract-utilities" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.005499 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerName="extract-utilities" Sep 30 18:34:21 crc kubenswrapper[4688]: E0930 18:34:21.005528 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerName="extract-utilities" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.005538 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerName="extract-utilities" Sep 30 18:34:21 crc kubenswrapper[4688]: E0930 18:34:21.005551 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerName="extract-content" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.005560 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerName="extract-content" Sep 30 18:34:21 crc kubenswrapper[4688]: E0930 18:34:21.005605 4688 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerName="registry-server" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.005614 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerName="registry-server" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.005855 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6be204c-f0b5-4585-9df6-3ce954a29110" containerName="registry-server" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.005883 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="761d8ac0-4aa6-4699-afaf-0d26f0529f07" containerName="registry-server" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.007938 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.034455 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-55h5c"] Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.181503 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59nhf\" (UniqueName: \"kubernetes.io/projected/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-kube-api-access-59nhf\") pod \"redhat-marketplace-55h5c\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") " pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.181617 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-catalog-content\") pod \"redhat-marketplace-55h5c\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") " pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.181740 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-utilities\") pod \"redhat-marketplace-55h5c\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") " pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.283197 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-catalog-content\") pod \"redhat-marketplace-55h5c\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") " pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.283310 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-utilities\") pod \"redhat-marketplace-55h5c\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") " pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.283376 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59nhf\" (UniqueName: \"kubernetes.io/projected/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-kube-api-access-59nhf\") pod \"redhat-marketplace-55h5c\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") " pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.283736 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-catalog-content\") pod \"redhat-marketplace-55h5c\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") " pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.283768 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-utilities\") pod \"redhat-marketplace-55h5c\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") " pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.307394 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59nhf\" (UniqueName: \"kubernetes.io/projected/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-kube-api-access-59nhf\") pod \"redhat-marketplace-55h5c\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") " pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.346464 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.806354 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-55h5c"] Sep 30 18:34:21 crc kubenswrapper[4688]: I0930 18:34:21.965466 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55h5c" event={"ID":"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db","Type":"ContainerStarted","Data":"94315c457869738f680ae6fe7657697bd72176530661b320265a312b9ace4e69"} Sep 30 18:34:22 crc kubenswrapper[4688]: I0930 18:34:22.546266 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:34:22 crc kubenswrapper[4688]: I0930 18:34:22.546757 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:34:22 crc kubenswrapper[4688]: I0930 18:34:22.980195 4688 generic.go:334] "Generic (PLEG): container finished" podID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerID="4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8" exitCode=0 Sep 30 18:34:22 crc kubenswrapper[4688]: I0930 18:34:22.980242 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55h5c" event={"ID":"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db","Type":"ContainerDied","Data":"4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8"} Sep 30 18:34:25 crc kubenswrapper[4688]: I0930 18:34:25.016339 4688 generic.go:334] "Generic (PLEG): container finished" podID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerID="f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b" exitCode=0 Sep 30 18:34:25 crc kubenswrapper[4688]: I0930 18:34:25.016451 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55h5c" 
event={"ID":"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db","Type":"ContainerDied","Data":"f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b"} Sep 30 18:34:26 crc kubenswrapper[4688]: I0930 18:34:26.030870 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55h5c" event={"ID":"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db","Type":"ContainerStarted","Data":"8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013"} Sep 30 18:34:26 crc kubenswrapper[4688]: I0930 18:34:26.056433 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-55h5c" podStartSLOduration=3.599180703 podStartE2EDuration="6.056410581s" podCreationTimestamp="2025-09-30 18:34:20 +0000 UTC" firstStartedPulling="2025-09-30 18:34:22.982634761 +0000 UTC m=+4720.278072319" lastFinishedPulling="2025-09-30 18:34:25.439864629 +0000 UTC m=+4722.735302197" observedRunningTime="2025-09-30 18:34:26.046108395 +0000 UTC m=+4723.341545953" watchObservedRunningTime="2025-09-30 18:34:26.056410581 +0000 UTC m=+4723.351848139" Sep 30 18:34:31 crc kubenswrapper[4688]: I0930 18:34:31.347177 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:31 crc kubenswrapper[4688]: I0930 18:34:31.347899 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:31 crc kubenswrapper[4688]: I0930 18:34:31.402136 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:32 crc kubenswrapper[4688]: I0930 18:34:32.150064 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-55h5c" Sep 30 18:34:33 crc kubenswrapper[4688]: I0930 18:34:33.179354 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-55h5c"] Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.116177 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-55h5c" podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerName="registry-server" containerID="cri-o://8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013" gracePeriod=2 Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.623767 4688 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 18:34:31 crc kubenswrapper[4688]: I0930 18:34:31.347177 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-55h5c"
Sep 30 18:34:31 crc kubenswrapper[4688]: I0930 18:34:31.347899 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-55h5c"
Sep 30 18:34:31 crc kubenswrapper[4688]: I0930 18:34:31.402136 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-55h5c"
Sep 30 18:34:32 crc kubenswrapper[4688]: I0930 18:34:32.150064 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-55h5c"
Sep 30 18:34:33 crc kubenswrapper[4688]: I0930 18:34:33.179354 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-55h5c"]
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.116177 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-55h5c" podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerName="registry-server" containerID="cri-o://8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013" gracePeriod=2
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.623767 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55h5c"
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.761418 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59nhf\" (UniqueName: \"kubernetes.io/projected/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-kube-api-access-59nhf\") pod \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") "
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.761544 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-utilities\") pod \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") "
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.761594 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-catalog-content\") pod \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\" (UID: \"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db\") "
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.763054 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-utilities" (OuterVolumeSpecName: "utilities") pod "f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" (UID: "f7ebc90c-be5d-44ab-97cc-c85e6a4b86db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.770191 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-kube-api-access-59nhf" (OuterVolumeSpecName: "kube-api-access-59nhf") pod "f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" (UID: "f7ebc90c-be5d-44ab-97cc-c85e6a4b86db"). InnerVolumeSpecName "kube-api-access-59nhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.780283 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" (UID: "f7ebc90c-be5d-44ab-97cc-c85e6a4b86db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.864197 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.864589 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 18:34:34 crc kubenswrapper[4688]: I0930 18:34:34.864602 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59nhf\" (UniqueName: \"kubernetes.io/projected/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db-kube-api-access-59nhf\") on node \"crc\" DevicePath \"\""
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.126973 4688 generic.go:334] "Generic (PLEG): container finished" podID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerID="8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013" exitCode=0
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.127046 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55h5c" event={"ID":"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db","Type":"ContainerDied","Data":"8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013"}
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.127060 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55h5c"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.127104 4688 scope.go:117] "RemoveContainer" containerID="8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.127091 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55h5c" event={"ID":"f7ebc90c-be5d-44ab-97cc-c85e6a4b86db","Type":"ContainerDied","Data":"94315c457869738f680ae6fe7657697bd72176530661b320265a312b9ace4e69"}
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.152383 4688 scope.go:117] "RemoveContainer" containerID="f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.168515 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-55h5c"]
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.178974 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-55h5c"]
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.188447 4688 scope.go:117] "RemoveContainer" containerID="4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.234782 4688 scope.go:117] "RemoveContainer" containerID="8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013"
Sep 30 18:34:35 crc kubenswrapper[4688]: E0930 18:34:35.235300 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013\": container with ID starting with 8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013 not found: ID does not exist" containerID="8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.235380 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013"} err="failed to get container status \"8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013\": rpc error: code = NotFound desc = could not find container \"8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013\": container with ID starting with 8d78db929192427bbc8050580d9211106b8c8cac0eeb4442f95609cbbf18f013 not found: ID does not exist"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.235424 4688 scope.go:117] "RemoveContainer" containerID="f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b"
Sep 30 18:34:35 crc kubenswrapper[4688]: E0930 18:34:35.236156 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b\": container with ID starting with f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b not found: ID does not exist" containerID="f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.236202 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b"} err="failed to get container status \"f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b\": rpc error: code = NotFound desc = could not find container \"f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b\": container with ID starting with f362154a6caa03bab9e81c44b680fe3d09a940d4ef76f9787f2e29c040cdc38b not found: ID does not exist"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.236230 4688 scope.go:117] "RemoveContainer" containerID="4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8"
Sep 30 18:34:35 crc kubenswrapper[4688]: E0930 18:34:35.236556 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8\": container with ID starting with 4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8 not found: ID does not exist" containerID="4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.236598 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8"} err="failed to get container status \"4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8\": rpc error: code = NotFound desc = could not find container \"4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8\": container with ID starting with 4d2b82f224aa948dfd11f4dbb5e04a9ae758f5f5f39bec8db86906c4f2e6a9c8 not found: ID does not exist"
Sep 30 18:34:35 crc kubenswrapper[4688]: I0930 18:34:35.441504 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" path="/var/lib/kubelet/pods/f7ebc90c-be5d-44ab-97cc-c85e6a4b86db/volumes"
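The "DeleteContainer returned error ... NotFound" entries just above are a benign race rather than a failure: by the time this second RemoveContainer pass runs, CRI-O has already deleted the containers, so the kubelet logs the NotFound and moves on. The usual shape of that idempotent-delete pattern, as a generic sketch (all names here are hypothetical, not kubelet code):

    class NotFoundError(Exception):
        pass

    def remove_container(runtime, container_id):
        try:
            runtime.remove(container_id)
        except NotFoundError:
            # Already gone (e.g. removed by a racing sync or the garbage
            # collector) - treat delete as success, mirroring the
            # "DeleteContainer returned error ... NotFound" lines above.
            pass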
Sep 30 18:34:52 crc kubenswrapper[4688]: I0930 18:34:52.545517 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 18:34:52 crc kubenswrapper[4688]: I0930 18:34:52.546340 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 18:35:22 crc kubenswrapper[4688]: I0930 18:35:22.546150 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 18:35:22 crc kubenswrapper[4688]: I0930 18:35:22.546771 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 18:35:22 crc kubenswrapper[4688]: I0930 18:35:22.546823 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4"
Sep 30 18:35:22 crc kubenswrapper[4688]: I0930 18:35:22.547594 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7aad3fc957476370a2c06af6d3ff879f77e3fc820edcdb8ef71c6c7b187d60d2"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 18:35:22 crc kubenswrapper[4688]: I0930 18:35:22.547681 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://7aad3fc957476370a2c06af6d3ff879f77e3fc820edcdb8ef71c6c7b187d60d2" gracePeriod=600
Sep 30 18:35:23 crc kubenswrapper[4688]: I0930 18:35:23.572184 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="7aad3fc957476370a2c06af6d3ff879f77e3fc820edcdb8ef71c6c7b187d60d2" exitCode=0
Sep 30 18:35:23 crc kubenswrapper[4688]: I0930 18:35:23.572252 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"7aad3fc957476370a2c06af6d3ff879f77e3fc820edcdb8ef71c6c7b187d60d2"}
Sep 30 18:35:23 crc kubenswrapper[4688]: I0930 18:35:23.572862 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739"}
Sep 30 18:35:23 crc kubenswrapper[4688]: I0930 18:35:23.572888 4688 scope.go:117] "RemoveContainer" containerID="a9dd4b7e0311b38e26487d94b97913579a338e2829757d427fa7c2068f107621"
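The machine-config-daemon entries above show the liveness-probe state machine end to end: failures spaced 30 seconds apart, a third consecutive failure flipping the probe to "unhealthy", then a kill with gracePeriod=600 and a fresh container (5bec2c24...). A minimal sketch of that loop, assuming periodSeconds=30 and failureThreshold=3 (inferred from the spacing and the restart landing on the third failure; the real values live in the pod spec, not in this log):

    import time
    import urllib.error
    import urllib.request

    URL, PERIOD, THRESHOLD = "http://127.0.0.1:8798/health", 30, 3

    failures = 0
    while True:
        try:
            with urllib.request.urlopen(URL, timeout=1) as resp:
                ok = 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            ok = False  # e.g. "connect: connection refused" while the daemon is down
        failures = 0 if ok else failures + 1
        if failures >= THRESHOLD:
            print("failed liveness probe, will be restarted")  # kubelet then kills with the grace period
            failures = 0
        time.sleep(PERIOD)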
Sep 30 18:37:52 crc kubenswrapper[4688]: I0930 18:37:52.545418 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 18:37:52 crc kubenswrapper[4688]: I0930 18:37:52.545932 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 18:38:22 crc kubenswrapper[4688]: I0930 18:38:22.546167 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 18:38:22 crc kubenswrapper[4688]: I0930 18:38:22.546917 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 18:38:52 crc kubenswrapper[4688]: I0930 18:38:52.545665 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 18:38:52 crc kubenswrapper[4688]: I0930 18:38:52.546258 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 18:38:52 crc kubenswrapper[4688]: I0930 18:38:52.546313 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4"
Sep 30 18:38:52 crc kubenswrapper[4688]: I0930 18:38:52.547212 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 18:38:52 crc kubenswrapper[4688]: I0930 18:38:52.547286 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" gracePeriod=600
podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:38:52 crc kubenswrapper[4688]: I0930 18:38:52.730786 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" exitCode=0 Sep 30 18:38:52 crc kubenswrapper[4688]: I0930 18:38:52.730840 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739"} Sep 30 18:38:52 crc kubenswrapper[4688]: I0930 18:38:52.730886 4688 scope.go:117] "RemoveContainer" containerID="7aad3fc957476370a2c06af6d3ff879f77e3fc820edcdb8ef71c6c7b187d60d2" Sep 30 18:38:52 crc kubenswrapper[4688]: I0930 18:38:52.731658 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:38:52 crc kubenswrapper[4688]: E0930 18:38:52.731987 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:39:06 crc kubenswrapper[4688]: I0930 18:39:06.429827 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:39:06 crc kubenswrapper[4688]: E0930 18:39:06.430926 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:39:21 crc kubenswrapper[4688]: I0930 18:39:21.431808 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:39:21 crc kubenswrapper[4688]: E0930 18:39:21.432712 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:39:35 crc kubenswrapper[4688]: I0930 18:39:35.430803 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:39:35 crc kubenswrapper[4688]: E0930 18:39:35.432088 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:39:47 crc kubenswrapper[4688]: I0930 
18:39:47.428902 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:39:47 crc kubenswrapper[4688]: E0930 18:39:47.429657 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:39:58 crc kubenswrapper[4688]: I0930 18:39:58.429215 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:39:58 crc kubenswrapper[4688]: E0930 18:39:58.430893 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:40:11 crc kubenswrapper[4688]: I0930 18:40:11.429766 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:40:11 crc kubenswrapper[4688]: E0930 18:40:11.430652 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:40:24 crc kubenswrapper[4688]: I0930 18:40:24.429171 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:40:24 crc kubenswrapper[4688]: E0930 18:40:24.430029 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.404423 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-526xd"] Sep 30 18:40:29 crc kubenswrapper[4688]: E0930 18:40:29.405474 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerName="extract-utilities" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.405492 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerName="extract-utilities" Sep 30 18:40:29 crc kubenswrapper[4688]: E0930 18:40:29.405541 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerName="registry-server" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.405550 4688 state_mem.go:107] "Deleted CPUSet assignment" 
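From 18:38:52 the pod sits in CrashLoopBackOff. The 10-15 second cadence of "RemoveContainer" / "Error syncing pod" pairs above is the sync loop re-evaluating the pod and being refused while the back-off timer runs, not the container restarting. The kubelet's restart back-off is documented to start at 10s and double up to a 5-minute cap, which matches both the "back-off 5m0s" message and the next successful start at 18:43:55, about five minutes after the 18:38:52 exit. A sketch of that schedule:

    # Crash-loop restart back-off as documented for the kubelet:
    # start at 10s, double each crash, cap at 5 minutes. By this point in
    # the log the pod has crashed often enough to sit at the cap, so every
    # sync between 18:38:52 and ~18:43:55 is refused with "back-off 5m0s".
    def backoff_schedule(initial=10, cap=300):
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= 2

    gen = backoff_schedule()
    print([next(gen) for _ in range(7)])  # [10, 20, 40, 80, 160, 300, 300]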
podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerName="registry-server" Sep 30 18:40:29 crc kubenswrapper[4688]: E0930 18:40:29.405560 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerName="extract-content" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.405588 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerName="extract-content" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.405860 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7ebc90c-be5d-44ab-97cc-c85e6a4b86db" containerName="registry-server" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.407160 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.421759 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-526xd"] Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.494595 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-utilities\") pod \"community-operators-526xd\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.494651 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxmtl\" (UniqueName: \"kubernetes.io/projected/4805af69-2147-4fb1-9fe5-31f9371fbb54-kube-api-access-mxmtl\") pod \"community-operators-526xd\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.494702 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-catalog-content\") pod \"community-operators-526xd\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.595824 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-utilities\") pod \"community-operators-526xd\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.595875 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxmtl\" (UniqueName: \"kubernetes.io/projected/4805af69-2147-4fb1-9fe5-31f9371fbb54-kube-api-access-mxmtl\") pod \"community-operators-526xd\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.595929 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-catalog-content\") pod \"community-operators-526xd\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.596526 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-catalog-content\") pod \"community-operators-526xd\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.596838 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-utilities\") pod \"community-operators-526xd\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.625980 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxmtl\" (UniqueName: \"kubernetes.io/projected/4805af69-2147-4fb1-9fe5-31f9371fbb54-kube-api-access-mxmtl\") pod \"community-operators-526xd\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:29 crc kubenswrapper[4688]: I0930 18:40:29.728684 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:30 crc kubenswrapper[4688]: I0930 18:40:30.211190 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-526xd"] Sep 30 18:40:30 crc kubenswrapper[4688]: I0930 18:40:30.728843 4688 generic.go:334] "Generic (PLEG): container finished" podID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerID="cfeb57a5672365579c66e6e556d9036ef8133a85dab733689b8f63e04842c7b9" exitCode=0 Sep 30 18:40:30 crc kubenswrapper[4688]: I0930 18:40:30.729018 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-526xd" event={"ID":"4805af69-2147-4fb1-9fe5-31f9371fbb54","Type":"ContainerDied","Data":"cfeb57a5672365579c66e6e556d9036ef8133a85dab733689b8f63e04842c7b9"} Sep 30 18:40:30 crc kubenswrapper[4688]: I0930 18:40:30.729196 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-526xd" event={"ID":"4805af69-2147-4fb1-9fe5-31f9371fbb54","Type":"ContainerStarted","Data":"1357d5596db124a9640d19f5ca213ab53b779b6c4ef4fcb18df0dc9a2759b6af"} Sep 30 18:40:30 crc kubenswrapper[4688]: I0930 18:40:30.731808 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 18:40:31 crc kubenswrapper[4688]: I0930 18:40:31.742167 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-526xd" event={"ID":"4805af69-2147-4fb1-9fe5-31f9371fbb54","Type":"ContainerStarted","Data":"802f10f4a954f6f444090bd53dc0734d695c4e186a7ae707ff64ae9134539c9a"} Sep 30 18:40:32 crc kubenswrapper[4688]: I0930 18:40:32.751313 4688 generic.go:334] "Generic (PLEG): container finished" podID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerID="802f10f4a954f6f444090bd53dc0734d695c4e186a7ae707ff64ae9134539c9a" exitCode=0 Sep 30 18:40:32 crc kubenswrapper[4688]: I0930 18:40:32.751369 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-526xd" event={"ID":"4805af69-2147-4fb1-9fe5-31f9371fbb54","Type":"ContainerDied","Data":"802f10f4a954f6f444090bd53dc0734d695c4e186a7ae707ff64ae9134539c9a"} Sep 30 18:40:33 crc kubenswrapper[4688]: I0930 18:40:33.761069 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-526xd" event={"ID":"4805af69-2147-4fb1-9fe5-31f9371fbb54","Type":"ContainerStarted","Data":"3d12e6f92daa7f92ec309fcc49e584cd567e6ddf02b8753fc797f48ffb2640ea"} Sep 30 18:40:33 crc kubenswrapper[4688]: I0930 18:40:33.796337 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-526xd" podStartSLOduration=2.250675458 podStartE2EDuration="4.796307425s" podCreationTimestamp="2025-09-30 18:40:29 +0000 UTC" firstStartedPulling="2025-09-30 18:40:30.731545998 +0000 UTC m=+5088.026983546" lastFinishedPulling="2025-09-30 18:40:33.277177955 +0000 UTC m=+5090.572615513" observedRunningTime="2025-09-30 18:40:33.784282394 +0000 UTC m=+5091.079719962" watchObservedRunningTime="2025-09-30 18:40:33.796307425 +0000 UTC m=+5091.091745023" Sep 30 18:40:39 crc kubenswrapper[4688]: I0930 18:40:39.430023 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:40:39 crc kubenswrapper[4688]: E0930 18:40:39.430607 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:40:39 crc kubenswrapper[4688]: I0930 18:40:39.729738 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:39 crc kubenswrapper[4688]: I0930 18:40:39.729804 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:39 crc kubenswrapper[4688]: I0930 18:40:39.791477 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:39 crc kubenswrapper[4688]: I0930 18:40:39.879655 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:40 crc kubenswrapper[4688]: I0930 18:40:40.033758 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-526xd"] Sep 30 18:40:41 crc kubenswrapper[4688]: I0930 18:40:41.843127 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-526xd" podUID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerName="registry-server" containerID="cri-o://3d12e6f92daa7f92ec309fcc49e584cd567e6ddf02b8753fc797f48ffb2640ea" gracePeriod=2 Sep 30 18:40:42 crc kubenswrapper[4688]: I0930 18:40:42.855055 4688 generic.go:334] "Generic (PLEG): container finished" podID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerID="3d12e6f92daa7f92ec309fcc49e584cd567e6ddf02b8753fc797f48ffb2640ea" exitCode=0 Sep 30 18:40:42 crc kubenswrapper[4688]: I0930 18:40:42.855631 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-526xd" event={"ID":"4805af69-2147-4fb1-9fe5-31f9371fbb54","Type":"ContainerDied","Data":"3d12e6f92daa7f92ec309fcc49e584cd567e6ddf02b8753fc797f48ffb2640ea"} Sep 30 18:40:42 crc kubenswrapper[4688]: I0930 18:40:42.855667 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-526xd" event={"ID":"4805af69-2147-4fb1-9fe5-31f9371fbb54","Type":"ContainerDied","Data":"1357d5596db124a9640d19f5ca213ab53b779b6c4ef4fcb18df0dc9a2759b6af"} Sep 30 18:40:42 crc kubenswrapper[4688]: I0930 18:40:42.855680 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1357d5596db124a9640d19f5ca213ab53b779b6c4ef4fcb18df0dc9a2759b6af" Sep 30 18:40:42 crc kubenswrapper[4688]: I0930 18:40:42.945485 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.066702 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-utilities\") pod \"4805af69-2147-4fb1-9fe5-31f9371fbb54\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.067553 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-utilities" (OuterVolumeSpecName: "utilities") pod "4805af69-2147-4fb1-9fe5-31f9371fbb54" (UID: "4805af69-2147-4fb1-9fe5-31f9371fbb54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.067803 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxmtl\" (UniqueName: \"kubernetes.io/projected/4805af69-2147-4fb1-9fe5-31f9371fbb54-kube-api-access-mxmtl\") pod \"4805af69-2147-4fb1-9fe5-31f9371fbb54\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.067880 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-catalog-content\") pod \"4805af69-2147-4fb1-9fe5-31f9371fbb54\" (UID: \"4805af69-2147-4fb1-9fe5-31f9371fbb54\") " Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.068785 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.076211 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4805af69-2147-4fb1-9fe5-31f9371fbb54-kube-api-access-mxmtl" (OuterVolumeSpecName: "kube-api-access-mxmtl") pod "4805af69-2147-4fb1-9fe5-31f9371fbb54" (UID: "4805af69-2147-4fb1-9fe5-31f9371fbb54"). InnerVolumeSpecName "kube-api-access-mxmtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.114285 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4805af69-2147-4fb1-9fe5-31f9371fbb54" (UID: "4805af69-2147-4fb1-9fe5-31f9371fbb54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.170884 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxmtl\" (UniqueName: \"kubernetes.io/projected/4805af69-2147-4fb1-9fe5-31f9371fbb54-kube-api-access-mxmtl\") on node \"crc\" DevicePath \"\"" Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.171139 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4805af69-2147-4fb1-9fe5-31f9371fbb54-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.864620 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-526xd" Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.885218 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-526xd"] Sep 30 18:40:43 crc kubenswrapper[4688]: I0930 18:40:43.892176 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-526xd"] Sep 30 18:40:45 crc kubenswrapper[4688]: I0930 18:40:45.442684 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4805af69-2147-4fb1-9fe5-31f9371fbb54" path="/var/lib/kubelet/pods/4805af69-2147-4fb1-9fe5-31f9371fbb54/volumes" Sep 30 18:40:51 crc kubenswrapper[4688]: I0930 18:40:51.429708 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:40:51 crc kubenswrapper[4688]: E0930 18:40:51.430660 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:41:05 crc kubenswrapper[4688]: I0930 18:41:05.430587 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:41:05 crc kubenswrapper[4688]: E0930 18:41:05.431315 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:41:20 crc kubenswrapper[4688]: I0930 18:41:20.429343 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:41:20 crc kubenswrapper[4688]: E0930 18:41:20.430437 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:41:34 crc kubenswrapper[4688]: I0930 18:41:34.429934 4688 scope.go:117] "RemoveContainer" 
containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:41:34 crc kubenswrapper[4688]: E0930 18:41:34.430748 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:41:45 crc kubenswrapper[4688]: I0930 18:41:45.429666 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:41:45 crc kubenswrapper[4688]: E0930 18:41:45.430760 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:41:56 crc kubenswrapper[4688]: I0930 18:41:56.429370 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:41:56 crc kubenswrapper[4688]: E0930 18:41:56.431342 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:42:10 crc kubenswrapper[4688]: I0930 18:42:10.430199 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:42:10 crc kubenswrapper[4688]: E0930 18:42:10.431063 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:42:24 crc kubenswrapper[4688]: I0930 18:42:24.428933 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:42:24 crc kubenswrapper[4688]: E0930 18:42:24.430544 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:42:38 crc kubenswrapper[4688]: I0930 18:42:38.429337 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:42:38 crc kubenswrapper[4688]: E0930 18:42:38.430092 4688 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:42:51 crc kubenswrapper[4688]: I0930 18:42:51.429389 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:42:51 crc kubenswrapper[4688]: E0930 18:42:51.431945 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:43:03 crc kubenswrapper[4688]: I0930 18:43:03.437051 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:43:03 crc kubenswrapper[4688]: E0930 18:43:03.437931 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:43:15 crc kubenswrapper[4688]: I0930 18:43:15.430307 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:43:15 crc kubenswrapper[4688]: E0930 18:43:15.431396 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:43:28 crc kubenswrapper[4688]: I0930 18:43:28.429371 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:43:28 crc kubenswrapper[4688]: E0930 18:43:28.430341 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:43:43 crc kubenswrapper[4688]: I0930 18:43:43.465333 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:43:43 crc kubenswrapper[4688]: E0930 18:43:43.466222 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.498598 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ngd4p"] Sep 30 18:43:48 crc kubenswrapper[4688]: E0930 18:43:48.499645 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerName="extract-utilities" Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.499664 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerName="extract-utilities" Sep 30 18:43:48 crc kubenswrapper[4688]: E0930 18:43:48.499714 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerName="registry-server" Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.499726 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerName="registry-server" Sep 30 18:43:48 crc kubenswrapper[4688]: E0930 18:43:48.499744 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerName="extract-content" Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.499752 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerName="extract-content" Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.500014 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4805af69-2147-4fb1-9fe5-31f9371fbb54" containerName="registry-server" Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.502040 4688 util.go:30] "No sandbox for pod can be found. 
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.502040 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.513802 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngd4p"]
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.554241 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-catalog-content\") pod \"certified-operators-ngd4p\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.554350 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-utilities\") pod \"certified-operators-ngd4p\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.554376 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfkf5\" (UniqueName: \"kubernetes.io/projected/8930a821-aa80-42c3-b13e-657dc0347545-kube-api-access-nfkf5\") pod \"certified-operators-ngd4p\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.656033 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-catalog-content\") pod \"certified-operators-ngd4p\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.656173 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-utilities\") pod \"certified-operators-ngd4p\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.656203 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfkf5\" (UniqueName: \"kubernetes.io/projected/8930a821-aa80-42c3-b13e-657dc0347545-kube-api-access-nfkf5\") pod \"certified-operators-ngd4p\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.656709 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-catalog-content\") pod \"certified-operators-ngd4p\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.656761 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-utilities\") pod \"certified-operators-ngd4p\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.680389 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfkf5\" (UniqueName: \"kubernetes.io/projected/8930a821-aa80-42c3-b13e-657dc0347545-kube-api-access-nfkf5\") pod \"certified-operators-ngd4p\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:48 crc kubenswrapper[4688]: I0930 18:43:48.827391 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ngd4p"
Sep 30 18:43:49 crc kubenswrapper[4688]: I0930 18:43:49.292542 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngd4p"]
Sep 30 18:43:49 crc kubenswrapper[4688]: I0930 18:43:49.684107 4688 generic.go:334] "Generic (PLEG): container finished" podID="8930a821-aa80-42c3-b13e-657dc0347545" containerID="4215059058fb79bb06d8d5c68183039765418a5d2dfd87ac569ba17941cf8bd8" exitCode=0
Sep 30 18:43:49 crc kubenswrapper[4688]: I0930 18:43:49.684155 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngd4p" event={"ID":"8930a821-aa80-42c3-b13e-657dc0347545","Type":"ContainerDied","Data":"4215059058fb79bb06d8d5c68183039765418a5d2dfd87ac569ba17941cf8bd8"}
Sep 30 18:43:49 crc kubenswrapper[4688]: I0930 18:43:49.684184 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngd4p" event={"ID":"8930a821-aa80-42c3-b13e-657dc0347545","Type":"ContainerStarted","Data":"40f41d81395a6bdea0445da1159e26ae84c7f934827c9c5ae9bf78a9dab3dd70"}
Sep 30 18:43:51 crc kubenswrapper[4688]: I0930 18:43:51.708598 4688 generic.go:334] "Generic (PLEG): container finished" podID="8930a821-aa80-42c3-b13e-657dc0347545" containerID="74748c3a15cd226188e4277342f2c9dbe923db9a806f372c4e27046d52ffb88d" exitCode=0
Sep 30 18:43:51 crc kubenswrapper[4688]: I0930 18:43:51.708731 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngd4p" event={"ID":"8930a821-aa80-42c3-b13e-657dc0347545","Type":"ContainerDied","Data":"74748c3a15cd226188e4277342f2c9dbe923db9a806f372c4e27046d52ffb88d"}
Sep 30 18:43:52 crc kubenswrapper[4688]: I0930 18:43:52.719461 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngd4p" event={"ID":"8930a821-aa80-42c3-b13e-657dc0347545","Type":"ContainerStarted","Data":"ff911b72093154e094e7c43b49bf4589e8af35aba2720691d8cd442d090f2393"}
Sep 30 18:43:52 crc kubenswrapper[4688]: I0930 18:43:52.741437 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ngd4p" podStartSLOduration=2.235992735 podStartE2EDuration="4.741418904s" podCreationTimestamp="2025-09-30 18:43:48 +0000 UTC" firstStartedPulling="2025-09-30 18:43:49.686117223 +0000 UTC m=+5286.981554781" lastFinishedPulling="2025-09-30 18:43:52.191543372 +0000 UTC m=+5289.486980950" observedRunningTime="2025-09-30 18:43:52.738668353 +0000 UTC m=+5290.034105951" watchObservedRunningTime="2025-09-30 18:43:52.741418904 +0000 UTC m=+5290.036856462"
Sep 30 18:43:55 crc kubenswrapper[4688]: I0930 18:43:55.435745 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739"
event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"632c8db89e47274cd7df046860a400bbaa41b6c0fef23ab6cf509e79368b4a58"} Sep 30 18:43:58 crc kubenswrapper[4688]: I0930 18:43:58.828024 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ngd4p" Sep 30 18:43:58 crc kubenswrapper[4688]: I0930 18:43:58.828643 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ngd4p" Sep 30 18:43:58 crc kubenswrapper[4688]: I0930 18:43:58.871607 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ngd4p" Sep 30 18:43:59 crc kubenswrapper[4688]: I0930 18:43:59.844671 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ngd4p" Sep 30 18:43:59 crc kubenswrapper[4688]: I0930 18:43:59.892553 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ngd4p"] Sep 30 18:44:01 crc kubenswrapper[4688]: I0930 18:44:01.812822 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ngd4p" podUID="8930a821-aa80-42c3-b13e-657dc0347545" containerName="registry-server" containerID="cri-o://ff911b72093154e094e7c43b49bf4589e8af35aba2720691d8cd442d090f2393" gracePeriod=2 Sep 30 18:44:02 crc kubenswrapper[4688]: I0930 18:44:02.823988 4688 generic.go:334] "Generic (PLEG): container finished" podID="8930a821-aa80-42c3-b13e-657dc0347545" containerID="ff911b72093154e094e7c43b49bf4589e8af35aba2720691d8cd442d090f2393" exitCode=0 Sep 30 18:44:02 crc kubenswrapper[4688]: I0930 18:44:02.824062 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngd4p" event={"ID":"8930a821-aa80-42c3-b13e-657dc0347545","Type":"ContainerDied","Data":"ff911b72093154e094e7c43b49bf4589e8af35aba2720691d8cd442d090f2393"} Sep 30 18:44:02 crc kubenswrapper[4688]: I0930 18:44:02.957242 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngd4p" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.112365 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfkf5\" (UniqueName: \"kubernetes.io/projected/8930a821-aa80-42c3-b13e-657dc0347545-kube-api-access-nfkf5\") pod \"8930a821-aa80-42c3-b13e-657dc0347545\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.112487 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-utilities\") pod \"8930a821-aa80-42c3-b13e-657dc0347545\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.112666 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-catalog-content\") pod \"8930a821-aa80-42c3-b13e-657dc0347545\" (UID: \"8930a821-aa80-42c3-b13e-657dc0347545\") " Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.117047 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-utilities" (OuterVolumeSpecName: "utilities") pod "8930a821-aa80-42c3-b13e-657dc0347545" (UID: "8930a821-aa80-42c3-b13e-657dc0347545"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.121161 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8930a821-aa80-42c3-b13e-657dc0347545-kube-api-access-nfkf5" (OuterVolumeSpecName: "kube-api-access-nfkf5") pod "8930a821-aa80-42c3-b13e-657dc0347545" (UID: "8930a821-aa80-42c3-b13e-657dc0347545"). InnerVolumeSpecName "kube-api-access-nfkf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.162959 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8930a821-aa80-42c3-b13e-657dc0347545" (UID: "8930a821-aa80-42c3-b13e-657dc0347545"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.216350 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfkf5\" (UniqueName: \"kubernetes.io/projected/8930a821-aa80-42c3-b13e-657dc0347545-kube-api-access-nfkf5\") on node \"crc\" DevicePath \"\"" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.216386 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.216396 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8930a821-aa80-42c3-b13e-657dc0347545-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.836705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngd4p" event={"ID":"8930a821-aa80-42c3-b13e-657dc0347545","Type":"ContainerDied","Data":"40f41d81395a6bdea0445da1159e26ae84c7f934827c9c5ae9bf78a9dab3dd70"} Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.836786 4688 scope.go:117] "RemoveContainer" containerID="ff911b72093154e094e7c43b49bf4589e8af35aba2720691d8cd442d090f2393" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.836789 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ngd4p" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.871364 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ngd4p"] Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.881912 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ngd4p"] Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.890116 4688 scope.go:117] "RemoveContainer" containerID="74748c3a15cd226188e4277342f2c9dbe923db9a806f372c4e27046d52ffb88d" Sep 30 18:44:03 crc kubenswrapper[4688]: I0930 18:44:03.927990 4688 scope.go:117] "RemoveContainer" containerID="4215059058fb79bb06d8d5c68183039765418a5d2dfd87ac569ba17941cf8bd8" Sep 30 18:44:05 crc kubenswrapper[4688]: I0930 18:44:05.443195 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8930a821-aa80-42c3-b13e-657dc0347545" path="/var/lib/kubelet/pods/8930a821-aa80-42c3-b13e-657dc0347545/volumes" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.749785 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qjq4p"] Sep 30 18:44:19 crc kubenswrapper[4688]: E0930 18:44:19.750767 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8930a821-aa80-42c3-b13e-657dc0347545" containerName="registry-server" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.750783 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8930a821-aa80-42c3-b13e-657dc0347545" containerName="registry-server" Sep 30 18:44:19 crc kubenswrapper[4688]: E0930 18:44:19.750805 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8930a821-aa80-42c3-b13e-657dc0347545" containerName="extract-utilities" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.750814 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8930a821-aa80-42c3-b13e-657dc0347545" containerName="extract-utilities" Sep 30 18:44:19 crc kubenswrapper[4688]: E0930 18:44:19.750824 4688 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8930a821-aa80-42c3-b13e-657dc0347545" containerName="extract-content" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.750832 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8930a821-aa80-42c3-b13e-657dc0347545" containerName="extract-content" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.751045 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8930a821-aa80-42c3-b13e-657dc0347545" containerName="registry-server" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.752447 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.792954 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjq4p"] Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.850610 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-utilities\") pod \"redhat-operators-qjq4p\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.850686 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6glm7\" (UniqueName: \"kubernetes.io/projected/57d03a30-afce-4592-b328-d0243d589184-kube-api-access-6glm7\") pod \"redhat-operators-qjq4p\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.850758 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-catalog-content\") pod \"redhat-operators-qjq4p\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.952467 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-utilities\") pod \"redhat-operators-qjq4p\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.952536 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6glm7\" (UniqueName: \"kubernetes.io/projected/57d03a30-afce-4592-b328-d0243d589184-kube-api-access-6glm7\") pod \"redhat-operators-qjq4p\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.952643 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-catalog-content\") pod \"redhat-operators-qjq4p\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.952997 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-utilities\") pod 
\"redhat-operators-qjq4p\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.953036 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-catalog-content\") pod \"redhat-operators-qjq4p\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:19 crc kubenswrapper[4688]: I0930 18:44:19.985560 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6glm7\" (UniqueName: \"kubernetes.io/projected/57d03a30-afce-4592-b328-d0243d589184-kube-api-access-6glm7\") pod \"redhat-operators-qjq4p\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:20 crc kubenswrapper[4688]: I0930 18:44:20.098442 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:20 crc kubenswrapper[4688]: I0930 18:44:20.542253 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjq4p"] Sep 30 18:44:20 crc kubenswrapper[4688]: I0930 18:44:20.997541 4688 generic.go:334] "Generic (PLEG): container finished" podID="57d03a30-afce-4592-b328-d0243d589184" containerID="b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df" exitCode=0 Sep 30 18:44:20 crc kubenswrapper[4688]: I0930 18:44:20.997978 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjq4p" event={"ID":"57d03a30-afce-4592-b328-d0243d589184","Type":"ContainerDied","Data":"b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df"} Sep 30 18:44:20 crc kubenswrapper[4688]: I0930 18:44:20.998009 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjq4p" event={"ID":"57d03a30-afce-4592-b328-d0243d589184","Type":"ContainerStarted","Data":"927a0fc463589297c79b6133e3e5b88da322bc7a6b27153283f09ebfafedd16a"} Sep 30 18:44:23 crc kubenswrapper[4688]: I0930 18:44:23.017334 4688 generic.go:334] "Generic (PLEG): container finished" podID="57d03a30-afce-4592-b328-d0243d589184" containerID="bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2" exitCode=0 Sep 30 18:44:23 crc kubenswrapper[4688]: I0930 18:44:23.017448 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjq4p" event={"ID":"57d03a30-afce-4592-b328-d0243d589184","Type":"ContainerDied","Data":"bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2"} Sep 30 18:44:24 crc kubenswrapper[4688]: I0930 18:44:24.031210 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjq4p" event={"ID":"57d03a30-afce-4592-b328-d0243d589184","Type":"ContainerStarted","Data":"4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d"} Sep 30 18:44:30 crc kubenswrapper[4688]: I0930 18:44:30.098713 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:30 crc kubenswrapper[4688]: I0930 18:44:30.099327 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:30 crc kubenswrapper[4688]: I0930 18:44:30.234908 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:30 crc kubenswrapper[4688]: I0930 18:44:30.254779 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qjq4p" podStartSLOduration=8.707963813 podStartE2EDuration="11.254759612s" podCreationTimestamp="2025-09-30 18:44:19 +0000 UTC" firstStartedPulling="2025-09-30 18:44:20.999547998 +0000 UTC m=+5318.294985556" lastFinishedPulling="2025-09-30 18:44:23.546343787 +0000 UTC m=+5320.841781355" observedRunningTime="2025-09-30 18:44:24.06108967 +0000 UTC m=+5321.356527308" watchObservedRunningTime="2025-09-30 18:44:30.254759612 +0000 UTC m=+5327.550197180" Sep 30 18:44:31 crc kubenswrapper[4688]: I0930 18:44:31.167083 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:31 crc kubenswrapper[4688]: I0930 18:44:31.235472 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjq4p"] Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.142726 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qjq4p" podUID="57d03a30-afce-4592-b328-d0243d589184" containerName="registry-server" containerID="cri-o://4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d" gracePeriod=2 Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.599802 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.724980 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-utilities\") pod \"57d03a30-afce-4592-b328-d0243d589184\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.725134 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-catalog-content\") pod \"57d03a30-afce-4592-b328-d0243d589184\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.725481 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6glm7\" (UniqueName: \"kubernetes.io/projected/57d03a30-afce-4592-b328-d0243d589184-kube-api-access-6glm7\") pod \"57d03a30-afce-4592-b328-d0243d589184\" (UID: \"57d03a30-afce-4592-b328-d0243d589184\") " Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.726255 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-utilities" (OuterVolumeSpecName: "utilities") pod "57d03a30-afce-4592-b328-d0243d589184" (UID: "57d03a30-afce-4592-b328-d0243d589184"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.733053 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d03a30-afce-4592-b328-d0243d589184-kube-api-access-6glm7" (OuterVolumeSpecName: "kube-api-access-6glm7") pod "57d03a30-afce-4592-b328-d0243d589184" (UID: "57d03a30-afce-4592-b328-d0243d589184"). InnerVolumeSpecName "kube-api-access-6glm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.808894 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57d03a30-afce-4592-b328-d0243d589184" (UID: "57d03a30-afce-4592-b328-d0243d589184"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.828221 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.828278 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57d03a30-afce-4592-b328-d0243d589184-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:44:33 crc kubenswrapper[4688]: I0930 18:44:33.828300 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6glm7\" (UniqueName: \"kubernetes.io/projected/57d03a30-afce-4592-b328-d0243d589184-kube-api-access-6glm7\") on node \"crc\" DevicePath \"\"" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.155255 4688 generic.go:334] "Generic (PLEG): container finished" podID="57d03a30-afce-4592-b328-d0243d589184" containerID="4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d" exitCode=0 Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.155311 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjq4p" event={"ID":"57d03a30-afce-4592-b328-d0243d589184","Type":"ContainerDied","Data":"4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d"} Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.155350 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjq4p" event={"ID":"57d03a30-afce-4592-b328-d0243d589184","Type":"ContainerDied","Data":"927a0fc463589297c79b6133e3e5b88da322bc7a6b27153283f09ebfafedd16a"} Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.155379 4688 scope.go:117] "RemoveContainer" containerID="4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.155544 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qjq4p" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.181024 4688 scope.go:117] "RemoveContainer" containerID="bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.208527 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjq4p"] Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.208858 4688 scope.go:117] "RemoveContainer" containerID="b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.218110 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qjq4p"] Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.259835 4688 scope.go:117] "RemoveContainer" containerID="4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d" Sep 30 18:44:34 crc kubenswrapper[4688]: E0930 18:44:34.260354 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d\": container with ID starting with 4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d not found: ID does not exist" containerID="4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.260407 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d"} err="failed to get container status \"4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d\": rpc error: code = NotFound desc = could not find container \"4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d\": container with ID starting with 4302b3bc6b6c6a11fa93189661d83483a452a7d3c22b0b8da8df4619bea90a3d not found: ID does not exist" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.260457 4688 scope.go:117] "RemoveContainer" containerID="bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2" Sep 30 18:44:34 crc kubenswrapper[4688]: E0930 18:44:34.260822 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2\": container with ID starting with bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2 not found: ID does not exist" containerID="bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.260850 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2"} err="failed to get container status \"bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2\": rpc error: code = NotFound desc = could not find container \"bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2\": container with ID starting with bf7e93836ce364c2d21cb24a752427e0938bb241a5935a94ffc76cd587dae4f2 not found: ID does not exist" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.260869 4688 scope.go:117] "RemoveContainer" containerID="b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df" Sep 30 18:44:34 crc kubenswrapper[4688]: E0930 18:44:34.261185 4688 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df\": container with ID starting with b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df not found: ID does not exist" containerID="b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df" Sep 30 18:44:34 crc kubenswrapper[4688]: I0930 18:44:34.261212 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df"} err="failed to get container status \"b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df\": rpc error: code = NotFound desc = could not find container \"b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df\": container with ID starting with b3c9317da45f9d8bb62ecb30c67b15627720f7b05f55a6cc65379218604f80df not found: ID does not exist" Sep 30 18:44:35 crc kubenswrapper[4688]: I0930 18:44:35.445398 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57d03a30-afce-4592-b328-d0243d589184" path="/var/lib/kubelet/pods/57d03a30-afce-4592-b328-d0243d589184/volumes" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.164622 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66"] Sep 30 18:45:00 crc kubenswrapper[4688]: E0930 18:45:00.165758 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d03a30-afce-4592-b328-d0243d589184" containerName="extract-utilities" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.165779 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d03a30-afce-4592-b328-d0243d589184" containerName="extract-utilities" Sep 30 18:45:00 crc kubenswrapper[4688]: E0930 18:45:00.165796 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d03a30-afce-4592-b328-d0243d589184" containerName="extract-content" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.165805 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d03a30-afce-4592-b328-d0243d589184" containerName="extract-content" Sep 30 18:45:00 crc kubenswrapper[4688]: E0930 18:45:00.165827 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d03a30-afce-4592-b328-d0243d589184" containerName="registry-server" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.165835 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d03a30-afce-4592-b328-d0243d589184" containerName="registry-server" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.166076 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="57d03a30-afce-4592-b328-d0243d589184" containerName="registry-server" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.166867 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.170241 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.170346 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.180039 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66"] Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.238079 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3baa6406-c888-47d1-b1c7-89fec25c1d9f-secret-volume\") pod \"collect-profiles-29320965-h2d66\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.238559 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3baa6406-c888-47d1-b1c7-89fec25c1d9f-config-volume\") pod \"collect-profiles-29320965-h2d66\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.238673 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cp99\" (UniqueName: \"kubernetes.io/projected/3baa6406-c888-47d1-b1c7-89fec25c1d9f-kube-api-access-8cp99\") pod \"collect-profiles-29320965-h2d66\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.340193 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3baa6406-c888-47d1-b1c7-89fec25c1d9f-config-volume\") pod \"collect-profiles-29320965-h2d66\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.340253 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cp99\" (UniqueName: \"kubernetes.io/projected/3baa6406-c888-47d1-b1c7-89fec25c1d9f-kube-api-access-8cp99\") pod \"collect-profiles-29320965-h2d66\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.340305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3baa6406-c888-47d1-b1c7-89fec25c1d9f-secret-volume\") pod \"collect-profiles-29320965-h2d66\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.341132 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3baa6406-c888-47d1-b1c7-89fec25c1d9f-config-volume\") pod 
\"collect-profiles-29320965-h2d66\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.359850 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3baa6406-c888-47d1-b1c7-89fec25c1d9f-secret-volume\") pod \"collect-profiles-29320965-h2d66\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.360114 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cp99\" (UniqueName: \"kubernetes.io/projected/3baa6406-c888-47d1-b1c7-89fec25c1d9f-kube-api-access-8cp99\") pod \"collect-profiles-29320965-h2d66\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.497133 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:00 crc kubenswrapper[4688]: W0930 18:45:00.952164 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3baa6406_c888_47d1_b1c7_89fec25c1d9f.slice/crio-ca9fa41c4cedad5a3d6ac4667bf54929427972e6b3ea4159fef591359fdf5237 WatchSource:0}: Error finding container ca9fa41c4cedad5a3d6ac4667bf54929427972e6b3ea4159fef591359fdf5237: Status 404 returned error can't find the container with id ca9fa41c4cedad5a3d6ac4667bf54929427972e6b3ea4159fef591359fdf5237 Sep 30 18:45:00 crc kubenswrapper[4688]: I0930 18:45:00.954913 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66"] Sep 30 18:45:01 crc kubenswrapper[4688]: I0930 18:45:01.442817 4688 generic.go:334] "Generic (PLEG): container finished" podID="3baa6406-c888-47d1-b1c7-89fec25c1d9f" containerID="a008737fdd58fd38cc01e6116b8c528e20a4285525a5b8872ead815d06b4aa1a" exitCode=0 Sep 30 18:45:01 crc kubenswrapper[4688]: I0930 18:45:01.442865 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" event={"ID":"3baa6406-c888-47d1-b1c7-89fec25c1d9f","Type":"ContainerDied","Data":"a008737fdd58fd38cc01e6116b8c528e20a4285525a5b8872ead815d06b4aa1a"} Sep 30 18:45:01 crc kubenswrapper[4688]: I0930 18:45:01.442890 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" event={"ID":"3baa6406-c888-47d1-b1c7-89fec25c1d9f","Type":"ContainerStarted","Data":"ca9fa41c4cedad5a3d6ac4667bf54929427972e6b3ea4159fef591359fdf5237"} Sep 30 18:45:02 crc kubenswrapper[4688]: I0930 18:45:02.864450 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:02 crc kubenswrapper[4688]: I0930 18:45:02.987957 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3baa6406-c888-47d1-b1c7-89fec25c1d9f-secret-volume\") pod \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " Sep 30 18:45:02 crc kubenswrapper[4688]: I0930 18:45:02.988042 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cp99\" (UniqueName: \"kubernetes.io/projected/3baa6406-c888-47d1-b1c7-89fec25c1d9f-kube-api-access-8cp99\") pod \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " Sep 30 18:45:02 crc kubenswrapper[4688]: I0930 18:45:02.988099 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3baa6406-c888-47d1-b1c7-89fec25c1d9f-config-volume\") pod \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\" (UID: \"3baa6406-c888-47d1-b1c7-89fec25c1d9f\") " Sep 30 18:45:02 crc kubenswrapper[4688]: I0930 18:45:02.989157 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3baa6406-c888-47d1-b1c7-89fec25c1d9f-config-volume" (OuterVolumeSpecName: "config-volume") pod "3baa6406-c888-47d1-b1c7-89fec25c1d9f" (UID: "3baa6406-c888-47d1-b1c7-89fec25c1d9f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 18:45:02 crc kubenswrapper[4688]: I0930 18:45:02.989286 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3baa6406-c888-47d1-b1c7-89fec25c1d9f-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 18:45:02 crc kubenswrapper[4688]: I0930 18:45:02.995046 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3baa6406-c888-47d1-b1c7-89fec25c1d9f-kube-api-access-8cp99" (OuterVolumeSpecName: "kube-api-access-8cp99") pod "3baa6406-c888-47d1-b1c7-89fec25c1d9f" (UID: "3baa6406-c888-47d1-b1c7-89fec25c1d9f"). InnerVolumeSpecName "kube-api-access-8cp99". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:45:02 crc kubenswrapper[4688]: I0930 18:45:02.995432 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baa6406-c888-47d1-b1c7-89fec25c1d9f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3baa6406-c888-47d1-b1c7-89fec25c1d9f" (UID: "3baa6406-c888-47d1-b1c7-89fec25c1d9f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 18:45:03 crc kubenswrapper[4688]: I0930 18:45:03.090696 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3baa6406-c888-47d1-b1c7-89fec25c1d9f-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 18:45:03 crc kubenswrapper[4688]: I0930 18:45:03.090734 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cp99\" (UniqueName: \"kubernetes.io/projected/3baa6406-c888-47d1-b1c7-89fec25c1d9f-kube-api-access-8cp99\") on node \"crc\" DevicePath \"\"" Sep 30 18:45:03 crc kubenswrapper[4688]: I0930 18:45:03.465646 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" event={"ID":"3baa6406-c888-47d1-b1c7-89fec25c1d9f","Type":"ContainerDied","Data":"ca9fa41c4cedad5a3d6ac4667bf54929427972e6b3ea4159fef591359fdf5237"} Sep 30 18:45:03 crc kubenswrapper[4688]: I0930 18:45:03.465689 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca9fa41c4cedad5a3d6ac4667bf54929427972e6b3ea4159fef591359fdf5237" Sep 30 18:45:03 crc kubenswrapper[4688]: I0930 18:45:03.465740 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320965-h2d66" Sep 30 18:45:03 crc kubenswrapper[4688]: I0930 18:45:03.958555 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw"] Sep 30 18:45:03 crc kubenswrapper[4688]: I0930 18:45:03.968867 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320920-nf2jw"] Sep 30 18:45:05 crc kubenswrapper[4688]: I0930 18:45:05.466120 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f907ce9-693b-49d7-9505-295b23e4e208" path="/var/lib/kubelet/pods/7f907ce9-693b-49d7-9505-295b23e4e208/volumes" Sep 30 18:45:50 crc kubenswrapper[4688]: I0930 18:45:50.239937 4688 scope.go:117] "RemoveContainer" containerID="89db9c80730a15b937dce7c0d68d1ea437cfc0124c7e97d7bf83f2df7ff2838f" Sep 30 18:45:51 crc kubenswrapper[4688]: I0930 18:45:51.762904 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c44vt"] Sep 30 18:45:51 crc kubenswrapper[4688]: E0930 18:45:51.763553 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3baa6406-c888-47d1-b1c7-89fec25c1d9f" containerName="collect-profiles" Sep 30 18:45:51 crc kubenswrapper[4688]: I0930 18:45:51.763580 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3baa6406-c888-47d1-b1c7-89fec25c1d9f" containerName="collect-profiles" Sep 30 18:45:51 crc kubenswrapper[4688]: I0930 18:45:51.763777 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3baa6406-c888-47d1-b1c7-89fec25c1d9f" containerName="collect-profiles" Sep 30 18:45:51 crc kubenswrapper[4688]: I0930 18:45:51.765052 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:51 crc kubenswrapper[4688]: I0930 18:45:51.775266 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c44vt"] Sep 30 18:45:51 crc kubenswrapper[4688]: I0930 18:45:51.925349 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-catalog-content\") pod \"redhat-marketplace-c44vt\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:51 crc kubenswrapper[4688]: I0930 18:45:51.925510 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qfgd\" (UniqueName: \"kubernetes.io/projected/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-kube-api-access-9qfgd\") pod \"redhat-marketplace-c44vt\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:51 crc kubenswrapper[4688]: I0930 18:45:51.925534 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-utilities\") pod \"redhat-marketplace-c44vt\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.027651 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qfgd\" (UniqueName: \"kubernetes.io/projected/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-kube-api-access-9qfgd\") pod \"redhat-marketplace-c44vt\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.027705 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-utilities\") pod \"redhat-marketplace-c44vt\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.027783 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-catalog-content\") pod \"redhat-marketplace-c44vt\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.028352 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-catalog-content\") pod \"redhat-marketplace-c44vt\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.028448 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-utilities\") pod \"redhat-marketplace-c44vt\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.048835 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9qfgd\" (UniqueName: \"kubernetes.io/projected/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-kube-api-access-9qfgd\") pod \"redhat-marketplace-c44vt\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.090509 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.536598 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c44vt"] Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.967875 4688 generic.go:334] "Generic (PLEG): container finished" podID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerID="f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55" exitCode=0 Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.971151 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.974200 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c44vt" event={"ID":"7f1641a7-091f-40d7-9b7a-298f5aad7bbd","Type":"ContainerDied","Data":"f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55"} Sep 30 18:45:52 crc kubenswrapper[4688]: I0930 18:45:52.974338 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c44vt" event={"ID":"7f1641a7-091f-40d7-9b7a-298f5aad7bbd","Type":"ContainerStarted","Data":"97b82cf43a7415742024636d1eb779bf985b8651066949b530c9bb6b332560cb"} Sep 30 18:45:53 crc kubenswrapper[4688]: I0930 18:45:53.982685 4688 generic.go:334] "Generic (PLEG): container finished" podID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerID="8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993" exitCode=0 Sep 30 18:45:53 crc kubenswrapper[4688]: I0930 18:45:53.982802 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c44vt" event={"ID":"7f1641a7-091f-40d7-9b7a-298f5aad7bbd","Type":"ContainerDied","Data":"8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993"} Sep 30 18:45:54 crc kubenswrapper[4688]: I0930 18:45:54.996895 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c44vt" event={"ID":"7f1641a7-091f-40d7-9b7a-298f5aad7bbd","Type":"ContainerStarted","Data":"786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21"} Sep 30 18:45:55 crc kubenswrapper[4688]: I0930 18:45:55.015758 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c44vt" podStartSLOduration=2.305312352 podStartE2EDuration="4.015738533s" podCreationTimestamp="2025-09-30 18:45:51 +0000 UTC" firstStartedPulling="2025-09-30 18:45:52.970923017 +0000 UTC m=+5410.266360575" lastFinishedPulling="2025-09-30 18:45:54.681349198 +0000 UTC m=+5411.976786756" observedRunningTime="2025-09-30 18:45:55.014420939 +0000 UTC m=+5412.309858497" watchObservedRunningTime="2025-09-30 18:45:55.015738533 +0000 UTC m=+5412.311176091" Sep 30 18:46:02 crc kubenswrapper[4688]: I0930 18:46:02.091544 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:46:02 crc kubenswrapper[4688]: I0930 18:46:02.092160 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:46:02 crc kubenswrapper[4688]: I0930 18:46:02.166262 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:46:03 crc kubenswrapper[4688]: I0930 18:46:03.150695 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:46:03 crc kubenswrapper[4688]: I0930 18:46:03.203453 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c44vt"] Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.096623 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c44vt" podUID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerName="registry-server" containerID="cri-o://786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21" gracePeriod=2 Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.573197 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.606716 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-catalog-content\") pod \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.606829 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-utilities\") pod \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.606907 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qfgd\" (UniqueName: \"kubernetes.io/projected/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-kube-api-access-9qfgd\") pod \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\" (UID: \"7f1641a7-091f-40d7-9b7a-298f5aad7bbd\") " Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.607852 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-utilities" (OuterVolumeSpecName: "utilities") pod "7f1641a7-091f-40d7-9b7a-298f5aad7bbd" (UID: "7f1641a7-091f-40d7-9b7a-298f5aad7bbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.613334 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-kube-api-access-9qfgd" (OuterVolumeSpecName: "kube-api-access-9qfgd") pod "7f1641a7-091f-40d7-9b7a-298f5aad7bbd" (UID: "7f1641a7-091f-40d7-9b7a-298f5aad7bbd"). InnerVolumeSpecName "kube-api-access-9qfgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.621040 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f1641a7-091f-40d7-9b7a-298f5aad7bbd" (UID: "7f1641a7-091f-40d7-9b7a-298f5aad7bbd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.708841 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.708880 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qfgd\" (UniqueName: \"kubernetes.io/projected/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-kube-api-access-9qfgd\") on node \"crc\" DevicePath \"\"" Sep 30 18:46:05 crc kubenswrapper[4688]: I0930 18:46:05.708890 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1641a7-091f-40d7-9b7a-298f5aad7bbd-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.113292 4688 generic.go:334] "Generic (PLEG): container finished" podID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerID="786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21" exitCode=0 Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.113370 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c44vt" event={"ID":"7f1641a7-091f-40d7-9b7a-298f5aad7bbd","Type":"ContainerDied","Data":"786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21"} Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.113421 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c44vt" event={"ID":"7f1641a7-091f-40d7-9b7a-298f5aad7bbd","Type":"ContainerDied","Data":"97b82cf43a7415742024636d1eb779bf985b8651066949b530c9bb6b332560cb"} Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.113457 4688 scope.go:117] "RemoveContainer" containerID="786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.113805 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c44vt" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.146736 4688 scope.go:117] "RemoveContainer" containerID="8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.182282 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c44vt"] Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.192031 4688 scope.go:117] "RemoveContainer" containerID="f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.193735 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c44vt"] Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.235653 4688 scope.go:117] "RemoveContainer" containerID="786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21" Sep 30 18:46:06 crc kubenswrapper[4688]: E0930 18:46:06.236280 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21\": container with ID starting with 786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21 not found: ID does not exist" containerID="786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.236327 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21"} err="failed to get container status \"786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21\": rpc error: code = NotFound desc = could not find container \"786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21\": container with ID starting with 786486ec885fcda104f8aa21174b1c7bba44018f45c2728854407a823b154d21 not found: ID does not exist" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.236356 4688 scope.go:117] "RemoveContainer" containerID="8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993" Sep 30 18:46:06 crc kubenswrapper[4688]: E0930 18:46:06.236774 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993\": container with ID starting with 8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993 not found: ID does not exist" containerID="8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.236930 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993"} err="failed to get container status \"8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993\": rpc error: code = NotFound desc = could not find container \"8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993\": container with ID starting with 8d68d762790734169fc6456b36e76afa1ac23c666a6d4a134da5762d057d2993 not found: ID does not exist" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.237034 4688 scope.go:117] "RemoveContainer" containerID="f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55" Sep 30 18:46:06 crc kubenswrapper[4688]: E0930 18:46:06.237456 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55\": container with ID starting with f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55 not found: ID does not exist" containerID="f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55" Sep 30 18:46:06 crc kubenswrapper[4688]: I0930 18:46:06.237484 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55"} err="failed to get container status \"f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55\": rpc error: code = NotFound desc = could not find container \"f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55\": container with ID starting with f8970fb99ab587c91bd3589d31ef0a954f57f891e7aa99975b148d5a7365ac55 not found: ID does not exist" Sep 30 18:46:07 crc kubenswrapper[4688]: I0930 18:46:07.442771 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" path="/var/lib/kubelet/pods/7f1641a7-091f-40d7-9b7a-298f5aad7bbd/volumes" Sep 30 18:46:22 crc kubenswrapper[4688]: I0930 18:46:22.546158 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:46:22 crc kubenswrapper[4688]: I0930 18:46:22.549283 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:46:50 crc kubenswrapper[4688]: I0930 18:46:50.318520 4688 scope.go:117] "RemoveContainer" containerID="802f10f4a954f6f444090bd53dc0734d695c4e186a7ae707ff64ae9134539c9a" Sep 30 18:46:50 crc kubenswrapper[4688]: I0930 18:46:50.353326 4688 scope.go:117] "RemoveContainer" containerID="3d12e6f92daa7f92ec309fcc49e584cd567e6ddf02b8753fc797f48ffb2640ea" Sep 30 18:46:50 crc kubenswrapper[4688]: I0930 18:46:50.388109 4688 scope.go:117] "RemoveContainer" containerID="cfeb57a5672365579c66e6e556d9036ef8133a85dab733689b8f63e04842c7b9" Sep 30 18:46:52 crc kubenswrapper[4688]: I0930 18:46:52.546501 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:46:52 crc kubenswrapper[4688]: I0930 18:46:52.547032 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:47:22 crc kubenswrapper[4688]: I0930 18:47:22.546048 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Sep 30 18:47:22 crc kubenswrapper[4688]: I0930 18:47:22.546820 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:47:22 crc kubenswrapper[4688]: I0930 18:47:22.546931 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 18:47:22 crc kubenswrapper[4688]: I0930 18:47:22.548272 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"632c8db89e47274cd7df046860a400bbaa41b6c0fef23ab6cf509e79368b4a58"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 18:47:22 crc kubenswrapper[4688]: I0930 18:47:22.548404 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://632c8db89e47274cd7df046860a400bbaa41b6c0fef23ab6cf509e79368b4a58" gracePeriod=600 Sep 30 18:47:22 crc kubenswrapper[4688]: I0930 18:47:22.914115 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="632c8db89e47274cd7df046860a400bbaa41b6c0fef23ab6cf509e79368b4a58" exitCode=0 Sep 30 18:47:22 crc kubenswrapper[4688]: I0930 18:47:22.914426 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"632c8db89e47274cd7df046860a400bbaa41b6c0fef23ab6cf509e79368b4a58"} Sep 30 18:47:22 crc kubenswrapper[4688]: I0930 18:47:22.914470 4688 scope.go:117] "RemoveContainer" containerID="5bec2c2400e0d6aa33408f47b48939cb919d6030bfb5e7a96f962e0a99a4a739" Sep 30 18:47:23 crc kubenswrapper[4688]: I0930 18:47:23.928517 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834"} Sep 30 18:49:22 crc kubenswrapper[4688]: I0930 18:49:22.546111 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:49:22 crc kubenswrapper[4688]: I0930 18:49:22.547036 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:49:52 crc kubenswrapper[4688]: I0930 18:49:52.546856 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:49:52 crc kubenswrapper[4688]: I0930 18:49:52.547482 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:50:22 crc kubenswrapper[4688]: I0930 18:50:22.545848 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:50:22 crc kubenswrapper[4688]: I0930 18:50:22.546667 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:50:22 crc kubenswrapper[4688]: I0930 18:50:22.546758 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 18:50:22 crc kubenswrapper[4688]: I0930 18:50:22.547761 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 18:50:22 crc kubenswrapper[4688]: I0930 18:50:22.547852 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" gracePeriod=600 Sep 30 18:50:22 crc kubenswrapper[4688]: E0930 18:50:22.709080 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:50:22 crc kubenswrapper[4688]: I0930 18:50:22.803091 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" exitCode=0 Sep 30 18:50:22 crc kubenswrapper[4688]: I0930 18:50:22.803171 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834"} Sep 30 18:50:22 crc kubenswrapper[4688]: I0930 18:50:22.803226 4688 scope.go:117] "RemoveContainer" containerID="632c8db89e47274cd7df046860a400bbaa41b6c0fef23ab6cf509e79368b4a58" 
Sep 30 18:50:22 crc kubenswrapper[4688]: I0930 18:50:22.803889 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:50:22 crc kubenswrapper[4688]: E0930 18:50:22.804162 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:50:36 crc kubenswrapper[4688]: I0930 18:50:36.429763 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:50:36 crc kubenswrapper[4688]: E0930 18:50:36.430986 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:50:47 crc kubenswrapper[4688]: I0930 18:50:47.429945 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:50:47 crc kubenswrapper[4688]: E0930 18:50:47.431128 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:50:59 crc kubenswrapper[4688]: I0930 18:50:59.429643 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:50:59 crc kubenswrapper[4688]: E0930 18:50:59.430976 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:51:12 crc kubenswrapper[4688]: I0930 18:51:12.429080 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:51:12 crc kubenswrapper[4688]: E0930 18:51:12.430263 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:51:26 crc kubenswrapper[4688]: I0930 18:51:26.429865 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:51:26 
crc kubenswrapper[4688]: E0930 18:51:26.430935 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:51:41 crc kubenswrapper[4688]: I0930 18:51:41.430051 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:51:41 crc kubenswrapper[4688]: E0930 18:51:41.431238 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.211326 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t57xd"] Sep 30 18:51:51 crc kubenswrapper[4688]: E0930 18:51:51.212344 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerName="registry-server" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.212360 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerName="registry-server" Sep 30 18:51:51 crc kubenswrapper[4688]: E0930 18:51:51.212388 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerName="extract-content" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.212395 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerName="extract-content" Sep 30 18:51:51 crc kubenswrapper[4688]: E0930 18:51:51.212414 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerName="extract-utilities" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.212421 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerName="extract-utilities" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.212622 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f1641a7-091f-40d7-9b7a-298f5aad7bbd" containerName="registry-server" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.214201 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.222333 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t57xd"] Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.409854 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-utilities\") pod \"community-operators-t57xd\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.409939 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9vv8\" (UniqueName: \"kubernetes.io/projected/1265d4d2-b24d-463b-be81-bd8862ff42a6-kube-api-access-d9vv8\") pod \"community-operators-t57xd\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.409999 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-catalog-content\") pod \"community-operators-t57xd\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.511322 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-utilities\") pod \"community-operators-t57xd\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.511819 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-utilities\") pod \"community-operators-t57xd\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.512043 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vv8\" (UniqueName: \"kubernetes.io/projected/1265d4d2-b24d-463b-be81-bd8862ff42a6-kube-api-access-d9vv8\") pod \"community-operators-t57xd\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.512329 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-catalog-content\") pod \"community-operators-t57xd\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.512655 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-catalog-content\") pod \"community-operators-t57xd\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.540880 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d9vv8\" (UniqueName: \"kubernetes.io/projected/1265d4d2-b24d-463b-be81-bd8862ff42a6-kube-api-access-d9vv8\") pod \"community-operators-t57xd\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:51 crc kubenswrapper[4688]: I0930 18:51:51.541417 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:51:52 crc kubenswrapper[4688]: I0930 18:51:52.069153 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t57xd"] Sep 30 18:51:52 crc kubenswrapper[4688]: I0930 18:51:52.730198 4688 generic.go:334] "Generic (PLEG): container finished" podID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerID="e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3" exitCode=0 Sep 30 18:51:52 crc kubenswrapper[4688]: I0930 18:51:52.730262 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t57xd" event={"ID":"1265d4d2-b24d-463b-be81-bd8862ff42a6","Type":"ContainerDied","Data":"e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3"} Sep 30 18:51:52 crc kubenswrapper[4688]: I0930 18:51:52.730525 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t57xd" event={"ID":"1265d4d2-b24d-463b-be81-bd8862ff42a6","Type":"ContainerStarted","Data":"e444866e545a48723c889e6ecdd5b3b027f50b5e4dab81c2ad8a80b625cc1b65"} Sep 30 18:51:52 crc kubenswrapper[4688]: I0930 18:51:52.732220 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 18:51:54 crc kubenswrapper[4688]: I0930 18:51:54.752491 4688 generic.go:334] "Generic (PLEG): container finished" podID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerID="69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0" exitCode=0 Sep 30 18:51:54 crc kubenswrapper[4688]: I0930 18:51:54.752560 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t57xd" event={"ID":"1265d4d2-b24d-463b-be81-bd8862ff42a6","Type":"ContainerDied","Data":"69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0"} Sep 30 18:51:55 crc kubenswrapper[4688]: I0930 18:51:55.765902 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t57xd" event={"ID":"1265d4d2-b24d-463b-be81-bd8862ff42a6","Type":"ContainerStarted","Data":"7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985"} Sep 30 18:51:55 crc kubenswrapper[4688]: I0930 18:51:55.788545 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t57xd" podStartSLOduration=2.301082421 podStartE2EDuration="4.788522021s" podCreationTimestamp="2025-09-30 18:51:51 +0000 UTC" firstStartedPulling="2025-09-30 18:51:52.7320224 +0000 UTC m=+5770.027459958" lastFinishedPulling="2025-09-30 18:51:55.219462 +0000 UTC m=+5772.514899558" observedRunningTime="2025-09-30 18:51:55.785758979 +0000 UTC m=+5773.081196557" watchObservedRunningTime="2025-09-30 18:51:55.788522021 +0000 UTC m=+5773.083959589" Sep 30 18:51:56 crc kubenswrapper[4688]: I0930 18:51:56.429182 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:51:56 crc kubenswrapper[4688]: E0930 18:51:56.429835 4688 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:52:01 crc kubenswrapper[4688]: I0930 18:52:01.543042 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:52:01 crc kubenswrapper[4688]: I0930 18:52:01.543702 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:52:01 crc kubenswrapper[4688]: I0930 18:52:01.603978 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:52:01 crc kubenswrapper[4688]: I0930 18:52:01.873966 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:52:01 crc kubenswrapper[4688]: I0930 18:52:01.932766 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t57xd"] Sep 30 18:52:03 crc kubenswrapper[4688]: I0930 18:52:03.834000 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t57xd" podUID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerName="registry-server" containerID="cri-o://7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985" gracePeriod=2 Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.334763 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.473077 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-utilities\") pod \"1265d4d2-b24d-463b-be81-bd8862ff42a6\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.473213 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-catalog-content\") pod \"1265d4d2-b24d-463b-be81-bd8862ff42a6\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.473427 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9vv8\" (UniqueName: \"kubernetes.io/projected/1265d4d2-b24d-463b-be81-bd8862ff42a6-kube-api-access-d9vv8\") pod \"1265d4d2-b24d-463b-be81-bd8862ff42a6\" (UID: \"1265d4d2-b24d-463b-be81-bd8862ff42a6\") " Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.475080 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-utilities" (OuterVolumeSpecName: "utilities") pod "1265d4d2-b24d-463b-be81-bd8862ff42a6" (UID: "1265d4d2-b24d-463b-be81-bd8862ff42a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.475724 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.479554 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1265d4d2-b24d-463b-be81-bd8862ff42a6-kube-api-access-d9vv8" (OuterVolumeSpecName: "kube-api-access-d9vv8") pod "1265d4d2-b24d-463b-be81-bd8862ff42a6" (UID: "1265d4d2-b24d-463b-be81-bd8862ff42a6"). InnerVolumeSpecName "kube-api-access-d9vv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.576533 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9vv8\" (UniqueName: \"kubernetes.io/projected/1265d4d2-b24d-463b-be81-bd8862ff42a6-kube-api-access-d9vv8\") on node \"crc\" DevicePath \"\"" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.584001 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1265d4d2-b24d-463b-be81-bd8862ff42a6" (UID: "1265d4d2-b24d-463b-be81-bd8862ff42a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.678276 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1265d4d2-b24d-463b-be81-bd8862ff42a6-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.849516 4688 generic.go:334] "Generic (PLEG): container finished" podID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerID="7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985" exitCode=0 Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.849701 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t57xd" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.851682 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t57xd" event={"ID":"1265d4d2-b24d-463b-be81-bd8862ff42a6","Type":"ContainerDied","Data":"7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985"} Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.851727 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t57xd" event={"ID":"1265d4d2-b24d-463b-be81-bd8862ff42a6","Type":"ContainerDied","Data":"e444866e545a48723c889e6ecdd5b3b027f50b5e4dab81c2ad8a80b625cc1b65"} Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.851761 4688 scope.go:117] "RemoveContainer" containerID="7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.882620 4688 scope.go:117] "RemoveContainer" containerID="69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.921014 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t57xd"] Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.925442 4688 scope.go:117] "RemoveContainer" containerID="e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3" Sep 30 18:52:04 crc kubenswrapper[4688]: I0930 18:52:04.936500 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t57xd"] Sep 30 18:52:05 crc kubenswrapper[4688]: I0930 18:52:05.000472 4688 scope.go:117] "RemoveContainer" containerID="7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985" Sep 30 18:52:05 crc kubenswrapper[4688]: E0930 18:52:05.001123 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985\": container with ID starting with 7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985 not found: ID does not exist" containerID="7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985" Sep 30 18:52:05 crc kubenswrapper[4688]: I0930 18:52:05.001173 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985"} err="failed to get container status \"7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985\": rpc error: code = NotFound desc = could not find container \"7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985\": container with ID starting with 7d5e725abc3bd9a1c12c3dd8a2b7938b3b6d9ccd86f50d12ef56a3ea46efc985 not found: ID does not exist" Sep 30 18:52:05 crc kubenswrapper[4688]: I0930 18:52:05.001195 4688 scope.go:117] "RemoveContainer" containerID="69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0" Sep 30 18:52:05 crc kubenswrapper[4688]: E0930 18:52:05.001633 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0\": container with ID starting with 69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0 not found: ID does not exist" containerID="69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0" Sep 30 18:52:05 crc kubenswrapper[4688]: I0930 18:52:05.001661 4688 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0"} err="failed to get container status \"69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0\": rpc error: code = NotFound desc = could not find container \"69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0\": container with ID starting with 69ffb1fd1ff988e602f719ca1c9c347ea64112869a458657f165275c32ad5cc0 not found: ID does not exist" Sep 30 18:52:05 crc kubenswrapper[4688]: I0930 18:52:05.001677 4688 scope.go:117] "RemoveContainer" containerID="e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3" Sep 30 18:52:05 crc kubenswrapper[4688]: E0930 18:52:05.001953 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3\": container with ID starting with e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3 not found: ID does not exist" containerID="e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3" Sep 30 18:52:05 crc kubenswrapper[4688]: I0930 18:52:05.001976 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3"} err="failed to get container status \"e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3\": rpc error: code = NotFound desc = could not find container \"e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3\": container with ID starting with e88c7585f2533c60565437b3a7b3340ffe29b5927cd8d345d21f477a2f4f27b3 not found: ID does not exist" Sep 30 18:52:05 crc kubenswrapper[4688]: I0930 18:52:05.445943 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1265d4d2-b24d-463b-be81-bd8862ff42a6" path="/var/lib/kubelet/pods/1265d4d2-b24d-463b-be81-bd8862ff42a6/volumes" Sep 30 18:52:08 crc kubenswrapper[4688]: I0930 18:52:08.430699 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:52:08 crc kubenswrapper[4688]: E0930 18:52:08.431803 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:52:20 crc kubenswrapper[4688]: I0930 18:52:20.429037 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:52:20 crc kubenswrapper[4688]: E0930 18:52:20.429973 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:52:32 crc kubenswrapper[4688]: I0930 18:52:32.430158 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:52:32 crc 
kubenswrapper[4688]: E0930 18:52:32.431487 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:52:44 crc kubenswrapper[4688]: I0930 18:52:44.430760 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:52:44 crc kubenswrapper[4688]: E0930 18:52:44.432182 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:52:55 crc kubenswrapper[4688]: I0930 18:52:55.429338 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:52:55 crc kubenswrapper[4688]: E0930 18:52:55.430245 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:53:06 crc kubenswrapper[4688]: I0930 18:53:06.430032 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:53:06 crc kubenswrapper[4688]: E0930 18:53:06.430815 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:53:20 crc kubenswrapper[4688]: I0930 18:53:20.431263 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:53:20 crc kubenswrapper[4688]: E0930 18:53:20.432432 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:53:35 crc kubenswrapper[4688]: I0930 18:53:35.429524 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:53:35 crc kubenswrapper[4688]: E0930 18:53:35.430390 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:53:48 crc kubenswrapper[4688]: I0930 18:53:48.430613 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:53:48 crc kubenswrapper[4688]: E0930 18:53:48.432245 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:54:03 crc kubenswrapper[4688]: I0930 18:54:03.439811 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:54:03 crc kubenswrapper[4688]: E0930 18:54:03.441150 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:54:15 crc kubenswrapper[4688]: I0930 18:54:15.429841 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:54:15 crc kubenswrapper[4688]: E0930 18:54:15.430828 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:54:30 crc kubenswrapper[4688]: I0930 18:54:30.428970 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:54:30 crc kubenswrapper[4688]: E0930 18:54:30.429787 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.567382 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-769js"] Sep 30 18:54:43 crc kubenswrapper[4688]: E0930 18:54:43.568326 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerName="registry-server" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.568343 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerName="registry-server" Sep 30 18:54:43 
crc kubenswrapper[4688]: E0930 18:54:43.568371 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerName="extract-content" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.568380 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerName="extract-content" Sep 30 18:54:43 crc kubenswrapper[4688]: E0930 18:54:43.568427 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerName="extract-utilities" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.568436 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerName="extract-utilities" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.568692 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1265d4d2-b24d-463b-be81-bd8862ff42a6" containerName="registry-server" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.570285 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.579018 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-769js"] Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.659287 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-catalog-content\") pod \"redhat-operators-769js\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.659470 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwpf\" (UniqueName: \"kubernetes.io/projected/f2a9089d-4a57-465b-b28c-849a480e0177-kube-api-access-ggwpf\") pod \"redhat-operators-769js\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.659543 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-utilities\") pod \"redhat-operators-769js\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.760765 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-utilities\") pod \"redhat-operators-769js\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.760847 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-catalog-content\") pod \"redhat-operators-769js\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.760950 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggwpf\" (UniqueName: 
\"kubernetes.io/projected/f2a9089d-4a57-465b-b28c-849a480e0177-kube-api-access-ggwpf\") pod \"redhat-operators-769js\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.761288 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-utilities\") pod \"redhat-operators-769js\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.761432 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-catalog-content\") pod \"redhat-operators-769js\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.790916 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggwpf\" (UniqueName: \"kubernetes.io/projected/f2a9089d-4a57-465b-b28c-849a480e0177-kube-api-access-ggwpf\") pod \"redhat-operators-769js\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:43 crc kubenswrapper[4688]: I0930 18:54:43.895313 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:44 crc kubenswrapper[4688]: I0930 18:54:44.403392 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-769js"] Sep 30 18:54:44 crc kubenswrapper[4688]: I0930 18:54:44.434992 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:54:44 crc kubenswrapper[4688]: E0930 18:54:44.435267 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:54:44 crc kubenswrapper[4688]: I0930 18:54:44.447968 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-769js" event={"ID":"f2a9089d-4a57-465b-b28c-849a480e0177","Type":"ContainerStarted","Data":"5787104a11ce4ac99f06293ae7cc36ecbe5046c6c900f15fbe94912826305c3d"} Sep 30 18:54:45 crc kubenswrapper[4688]: I0930 18:54:45.459209 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2a9089d-4a57-465b-b28c-849a480e0177" containerID="cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f" exitCode=0 Sep 30 18:54:45 crc kubenswrapper[4688]: I0930 18:54:45.459425 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-769js" event={"ID":"f2a9089d-4a57-465b-b28c-849a480e0177","Type":"ContainerDied","Data":"cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f"} Sep 30 18:54:47 crc kubenswrapper[4688]: I0930 18:54:47.489701 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2a9089d-4a57-465b-b28c-849a480e0177" containerID="7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732" exitCode=0 
Sep 30 18:54:47 crc kubenswrapper[4688]: I0930 18:54:47.489796 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-769js" event={"ID":"f2a9089d-4a57-465b-b28c-849a480e0177","Type":"ContainerDied","Data":"7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732"} Sep 30 18:54:48 crc kubenswrapper[4688]: I0930 18:54:48.503928 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-769js" event={"ID":"f2a9089d-4a57-465b-b28c-849a480e0177","Type":"ContainerStarted","Data":"fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056"} Sep 30 18:54:48 crc kubenswrapper[4688]: I0930 18:54:48.523117 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-769js" podStartSLOduration=2.949949789 podStartE2EDuration="5.523097397s" podCreationTimestamp="2025-09-30 18:54:43 +0000 UTC" firstStartedPulling="2025-09-30 18:54:45.461734451 +0000 UTC m=+5942.757172009" lastFinishedPulling="2025-09-30 18:54:48.034882059 +0000 UTC m=+5945.330319617" observedRunningTime="2025-09-30 18:54:48.522733988 +0000 UTC m=+5945.818171546" watchObservedRunningTime="2025-09-30 18:54:48.523097397 +0000 UTC m=+5945.818534955" Sep 30 18:54:53 crc kubenswrapper[4688]: I0930 18:54:53.895632 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:53 crc kubenswrapper[4688]: I0930 18:54:53.897120 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:54:54 crc kubenswrapper[4688]: I0930 18:54:54.946233 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-769js" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" containerName="registry-server" probeResult="failure" output=< Sep 30 18:54:54 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Sep 30 18:54:54 crc kubenswrapper[4688]: > Sep 30 18:54:56 crc kubenswrapper[4688]: I0930 18:54:56.429636 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:54:56 crc kubenswrapper[4688]: E0930 18:54:56.430077 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.022940 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-788j5"] Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.025525 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.036627 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-788j5"] Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.146247 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpn9z\" (UniqueName: \"kubernetes.io/projected/e6e45eb9-b61f-41c1-8765-682652561f1b-kube-api-access-kpn9z\") pod \"certified-operators-788j5\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.147537 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-catalog-content\") pod \"certified-operators-788j5\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.147880 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-utilities\") pod \"certified-operators-788j5\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.249407 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpn9z\" (UniqueName: \"kubernetes.io/projected/e6e45eb9-b61f-41c1-8765-682652561f1b-kube-api-access-kpn9z\") pod \"certified-operators-788j5\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.249515 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-catalog-content\") pod \"certified-operators-788j5\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.249559 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-utilities\") pod \"certified-operators-788j5\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.250082 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-utilities\") pod \"certified-operators-788j5\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.250258 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-catalog-content\") pod \"certified-operators-788j5\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.269461 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kpn9z\" (UniqueName: \"kubernetes.io/projected/e6e45eb9-b61f-41c1-8765-682652561f1b-kube-api-access-kpn9z\") pod \"certified-operators-788j5\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.345336 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:54:59 crc kubenswrapper[4688]: I0930 18:54:59.865917 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-788j5"] Sep 30 18:55:00 crc kubenswrapper[4688]: I0930 18:55:00.617151 4688 generic.go:334] "Generic (PLEG): container finished" podID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerID="17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844" exitCode=0 Sep 30 18:55:00 crc kubenswrapper[4688]: I0930 18:55:00.617247 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-788j5" event={"ID":"e6e45eb9-b61f-41c1-8765-682652561f1b","Type":"ContainerDied","Data":"17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844"} Sep 30 18:55:00 crc kubenswrapper[4688]: I0930 18:55:00.617503 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-788j5" event={"ID":"e6e45eb9-b61f-41c1-8765-682652561f1b","Type":"ContainerStarted","Data":"54e4cfc651c6210b5a0d855d49766e03b0d5f2bd01d28cf50659b6309f8c0209"} Sep 30 18:55:02 crc kubenswrapper[4688]: I0930 18:55:02.636954 4688 generic.go:334] "Generic (PLEG): container finished" podID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerID="c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656" exitCode=0 Sep 30 18:55:02 crc kubenswrapper[4688]: I0930 18:55:02.637022 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-788j5" event={"ID":"e6e45eb9-b61f-41c1-8765-682652561f1b","Type":"ContainerDied","Data":"c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656"} Sep 30 18:55:03 crc kubenswrapper[4688]: I0930 18:55:03.648021 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-788j5" event={"ID":"e6e45eb9-b61f-41c1-8765-682652561f1b","Type":"ContainerStarted","Data":"ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c"} Sep 30 18:55:03 crc kubenswrapper[4688]: I0930 18:55:03.670019 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-788j5" podStartSLOduration=2.25100671 podStartE2EDuration="4.670001547s" podCreationTimestamp="2025-09-30 18:54:59 +0000 UTC" firstStartedPulling="2025-09-30 18:55:00.619913314 +0000 UTC m=+5957.915350872" lastFinishedPulling="2025-09-30 18:55:03.038908141 +0000 UTC m=+5960.334345709" observedRunningTime="2025-09-30 18:55:03.662878213 +0000 UTC m=+5960.958315771" watchObservedRunningTime="2025-09-30 18:55:03.670001547 +0000 UTC m=+5960.965439105" Sep 30 18:55:03 crc kubenswrapper[4688]: I0930 18:55:03.948399 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:55:04 crc kubenswrapper[4688]: I0930 18:55:04.002874 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:55:05 crc kubenswrapper[4688]: I0930 18:55:05.402549 4688 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-769js"] Sep 30 18:55:05 crc kubenswrapper[4688]: I0930 18:55:05.662348 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-769js" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" containerName="registry-server" containerID="cri-o://fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056" gracePeriod=2 Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.190560 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.311829 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-catalog-content\") pod \"f2a9089d-4a57-465b-b28c-849a480e0177\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.311881 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggwpf\" (UniqueName: \"kubernetes.io/projected/f2a9089d-4a57-465b-b28c-849a480e0177-kube-api-access-ggwpf\") pod \"f2a9089d-4a57-465b-b28c-849a480e0177\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.311951 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-utilities\") pod \"f2a9089d-4a57-465b-b28c-849a480e0177\" (UID: \"f2a9089d-4a57-465b-b28c-849a480e0177\") " Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.312552 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-utilities" (OuterVolumeSpecName: "utilities") pod "f2a9089d-4a57-465b-b28c-849a480e0177" (UID: "f2a9089d-4a57-465b-b28c-849a480e0177"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.318237 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a9089d-4a57-465b-b28c-849a480e0177-kube-api-access-ggwpf" (OuterVolumeSpecName: "kube-api-access-ggwpf") pod "f2a9089d-4a57-465b-b28c-849a480e0177" (UID: "f2a9089d-4a57-465b-b28c-849a480e0177"). InnerVolumeSpecName "kube-api-access-ggwpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.388307 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2a9089d-4a57-465b-b28c-849a480e0177" (UID: "f2a9089d-4a57-465b-b28c-849a480e0177"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.414755 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.414798 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggwpf\" (UniqueName: \"kubernetes.io/projected/f2a9089d-4a57-465b-b28c-849a480e0177-kube-api-access-ggwpf\") on node \"crc\" DevicePath \"\"" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.414814 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2a9089d-4a57-465b-b28c-849a480e0177-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.673591 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2a9089d-4a57-465b-b28c-849a480e0177" containerID="fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056" exitCode=0 Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.673643 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-769js" event={"ID":"f2a9089d-4a57-465b-b28c-849a480e0177","Type":"ContainerDied","Data":"fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056"} Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.673674 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-769js" event={"ID":"f2a9089d-4a57-465b-b28c-849a480e0177","Type":"ContainerDied","Data":"5787104a11ce4ac99f06293ae7cc36ecbe5046c6c900f15fbe94912826305c3d"} Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.673694 4688 scope.go:117] "RemoveContainer" containerID="fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.673744 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-769js" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.693633 4688 scope.go:117] "RemoveContainer" containerID="7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.720473 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-769js"] Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.724135 4688 scope.go:117] "RemoveContainer" containerID="cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.727498 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-769js"] Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.771091 4688 scope.go:117] "RemoveContainer" containerID="fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056" Sep 30 18:55:06 crc kubenswrapper[4688]: E0930 18:55:06.771428 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056\": container with ID starting with fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056 not found: ID does not exist" containerID="fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.771469 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056"} err="failed to get container status \"fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056\": rpc error: code = NotFound desc = could not find container \"fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056\": container with ID starting with fa3a51e6573e75b92b754df77fffdbbed387a6d198ce830d2fe7e7d5ae8a0056 not found: ID does not exist" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.771493 4688 scope.go:117] "RemoveContainer" containerID="7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732" Sep 30 18:55:06 crc kubenswrapper[4688]: E0930 18:55:06.771938 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732\": container with ID starting with 7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732 not found: ID does not exist" containerID="7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.771966 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732"} err="failed to get container status \"7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732\": rpc error: code = NotFound desc = could not find container \"7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732\": container with ID starting with 7a6288bfa9e2cfc68d912ae275a1697b35eed78ae3f4f2e6e757c5f9e1e8a732 not found: ID does not exist" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.771984 4688 scope.go:117] "RemoveContainer" containerID="cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f" Sep 30 18:55:06 crc kubenswrapper[4688]: E0930 18:55:06.772625 4688 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f\": container with ID starting with cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f not found: ID does not exist" containerID="cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f" Sep 30 18:55:06 crc kubenswrapper[4688]: I0930 18:55:06.772651 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f"} err="failed to get container status \"cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f\": rpc error: code = NotFound desc = could not find container \"cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f\": container with ID starting with cdaca1cdb4f018b53d45b4c5fec3a12ed6e9112c57834c5ac2fded523f12dc1f not found: ID does not exist" Sep 30 18:55:07 crc kubenswrapper[4688]: I0930 18:55:07.440880 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" path="/var/lib/kubelet/pods/f2a9089d-4a57-465b-b28c-849a480e0177/volumes" Sep 30 18:55:09 crc kubenswrapper[4688]: I0930 18:55:09.345766 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:55:09 crc kubenswrapper[4688]: I0930 18:55:09.346063 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:55:09 crc kubenswrapper[4688]: I0930 18:55:09.440118 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:55:09 crc kubenswrapper[4688]: I0930 18:55:09.760970 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:55:10 crc kubenswrapper[4688]: I0930 18:55:10.430413 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:55:10 crc kubenswrapper[4688]: E0930 18:55:10.430903 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 18:55:10 crc kubenswrapper[4688]: I0930 18:55:10.608848 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-788j5"] Sep 30 18:55:11 crc kubenswrapper[4688]: I0930 18:55:11.717818 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-788j5" podUID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerName="registry-server" containerID="cri-o://ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c" gracePeriod=2 Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.205269 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.340979 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-utilities\") pod \"e6e45eb9-b61f-41c1-8765-682652561f1b\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.341149 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpn9z\" (UniqueName: \"kubernetes.io/projected/e6e45eb9-b61f-41c1-8765-682652561f1b-kube-api-access-kpn9z\") pod \"e6e45eb9-b61f-41c1-8765-682652561f1b\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.341253 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-catalog-content\") pod \"e6e45eb9-b61f-41c1-8765-682652561f1b\" (UID: \"e6e45eb9-b61f-41c1-8765-682652561f1b\") " Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.342870 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-utilities" (OuterVolumeSpecName: "utilities") pod "e6e45eb9-b61f-41c1-8765-682652561f1b" (UID: "e6e45eb9-b61f-41c1-8765-682652561f1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.347693 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e45eb9-b61f-41c1-8765-682652561f1b-kube-api-access-kpn9z" (OuterVolumeSpecName: "kube-api-access-kpn9z") pod "e6e45eb9-b61f-41c1-8765-682652561f1b" (UID: "e6e45eb9-b61f-41c1-8765-682652561f1b"). InnerVolumeSpecName "kube-api-access-kpn9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.387143 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6e45eb9-b61f-41c1-8765-682652561f1b" (UID: "e6e45eb9-b61f-41c1-8765-682652561f1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.443974 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.444016 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpn9z\" (UniqueName: \"kubernetes.io/projected/e6e45eb9-b61f-41c1-8765-682652561f1b-kube-api-access-kpn9z\") on node \"crc\" DevicePath \"\"" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.444027 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e45eb9-b61f-41c1-8765-682652561f1b-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.727762 4688 generic.go:334] "Generic (PLEG): container finished" podID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerID="ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c" exitCode=0 Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.727806 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-788j5" event={"ID":"e6e45eb9-b61f-41c1-8765-682652561f1b","Type":"ContainerDied","Data":"ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c"} Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.727834 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-788j5" event={"ID":"e6e45eb9-b61f-41c1-8765-682652561f1b","Type":"ContainerDied","Data":"54e4cfc651c6210b5a0d855d49766e03b0d5f2bd01d28cf50659b6309f8c0209"} Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.727841 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-788j5" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.727855 4688 scope.go:117] "RemoveContainer" containerID="ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.757834 4688 scope.go:117] "RemoveContainer" containerID="c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.762427 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-788j5"] Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.778303 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-788j5"] Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.786278 4688 scope.go:117] "RemoveContainer" containerID="17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.849086 4688 scope.go:117] "RemoveContainer" containerID="ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c" Sep 30 18:55:12 crc kubenswrapper[4688]: E0930 18:55:12.849544 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c\": container with ID starting with ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c not found: ID does not exist" containerID="ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.849692 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c"} err="failed to get container status \"ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c\": rpc error: code = NotFound desc = could not find container \"ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c\": container with ID starting with ec415fdfade4eaef2706a893507f2f182dbf923e4a26b59bb7a6a2c6b64f870c not found: ID does not exist" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.849722 4688 scope.go:117] "RemoveContainer" containerID="c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656" Sep 30 18:55:12 crc kubenswrapper[4688]: E0930 18:55:12.850553 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656\": container with ID starting with c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656 not found: ID does not exist" containerID="c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.850712 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656"} err="failed to get container status \"c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656\": rpc error: code = NotFound desc = could not find container \"c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656\": container with ID starting with c888836da5448624882b77c0fed6afdaf69b3c243a001898734823a6a1ad0656 not found: ID does not exist" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.850743 4688 scope.go:117] "RemoveContainer" 
containerID="17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844" Sep 30 18:55:12 crc kubenswrapper[4688]: E0930 18:55:12.851239 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844\": container with ID starting with 17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844 not found: ID does not exist" containerID="17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844" Sep 30 18:55:12 crc kubenswrapper[4688]: I0930 18:55:12.851279 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844"} err="failed to get container status \"17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844\": rpc error: code = NotFound desc = could not find container \"17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844\": container with ID starting with 17c640e1ead140fac3063ee3cb986ee1a04246247ee5faa95d6c65743944a844 not found: ID does not exist" Sep 30 18:55:13 crc kubenswrapper[4688]: I0930 18:55:13.440641 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e45eb9-b61f-41c1-8765-682652561f1b" path="/var/lib/kubelet/pods/e6e45eb9-b61f-41c1-8765-682652561f1b/volumes" Sep 30 18:55:25 crc kubenswrapper[4688]: I0930 18:55:25.429305 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:55:25 crc kubenswrapper[4688]: I0930 18:55:25.858912 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"61972f67c084cf8890fe543a5bc7f115c88b899433a5ca21104e9c9ba1c273a5"} Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.607997 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nt5nz"] Sep 30 18:56:54 crc kubenswrapper[4688]: E0930 18:56:54.609020 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerName="registry-server" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.609042 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerName="registry-server" Sep 30 18:56:54 crc kubenswrapper[4688]: E0930 18:56:54.609056 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerName="extract-utilities" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.609067 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerName="extract-utilities" Sep 30 18:56:54 crc kubenswrapper[4688]: E0930 18:56:54.609085 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerName="extract-content" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.609096 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerName="extract-content" Sep 30 18:56:54 crc kubenswrapper[4688]: E0930 18:56:54.609108 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" containerName="registry-server" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.609117 4688 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" containerName="registry-server" Sep 30 18:56:54 crc kubenswrapper[4688]: E0930 18:56:54.609181 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" containerName="extract-utilities" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.609193 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" containerName="extract-utilities" Sep 30 18:56:54 crc kubenswrapper[4688]: E0930 18:56:54.609213 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" containerName="extract-content" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.609223 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" containerName="extract-content" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.609462 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a9089d-4a57-465b-b28c-849a480e0177" containerName="registry-server" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.609513 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e45eb9-b61f-41c1-8765-682652561f1b" containerName="registry-server" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.611316 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.620946 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt5nz"] Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.746235 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-utilities\") pod \"redhat-marketplace-nt5nz\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.746325 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kdbp\" (UniqueName: \"kubernetes.io/projected/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-kube-api-access-6kdbp\") pod \"redhat-marketplace-nt5nz\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.746391 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-catalog-content\") pod \"redhat-marketplace-nt5nz\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.848225 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-utilities\") pod \"redhat-marketplace-nt5nz\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.848358 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kdbp\" (UniqueName: 
\"kubernetes.io/projected/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-kube-api-access-6kdbp\") pod \"redhat-marketplace-nt5nz\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.848459 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-catalog-content\") pod \"redhat-marketplace-nt5nz\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.848803 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-utilities\") pod \"redhat-marketplace-nt5nz\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.848915 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-catalog-content\") pod \"redhat-marketplace-nt5nz\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.870604 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kdbp\" (UniqueName: \"kubernetes.io/projected/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-kube-api-access-6kdbp\") pod \"redhat-marketplace-nt5nz\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:54 crc kubenswrapper[4688]: I0930 18:56:54.944718 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:56:55 crc kubenswrapper[4688]: I0930 18:56:55.403998 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt5nz"] Sep 30 18:56:55 crc kubenswrapper[4688]: I0930 18:56:55.739136 4688 generic.go:334] "Generic (PLEG): container finished" podID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerID="fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4" exitCode=0 Sep 30 18:56:55 crc kubenswrapper[4688]: I0930 18:56:55.739244 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt5nz" event={"ID":"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1","Type":"ContainerDied","Data":"fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4"} Sep 30 18:56:55 crc kubenswrapper[4688]: I0930 18:56:55.739406 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt5nz" event={"ID":"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1","Type":"ContainerStarted","Data":"b59698c7f8b7c73dbb96045fe35b1d973c2f9b22b53fb7e37f4de93cf186803c"} Sep 30 18:56:55 crc kubenswrapper[4688]: I0930 18:56:55.743440 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 18:56:55 crc kubenswrapper[4688]: I0930 18:56:55.996037 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd5qb/must-gather-46dzk"] Sep 30 18:56:55 crc kubenswrapper[4688]: I0930 18:56:55.997975 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.000527 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bd5qb"/"openshift-service-ca.crt" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.000535 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bd5qb"/"default-dockercfg-z9vk4" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.000606 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bd5qb"/"kube-root-ca.crt" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.004779 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bd5qb/must-gather-46dzk"] Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.067847 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/657ce608-1270-43be-98bd-4f1705b45916-must-gather-output\") pod \"must-gather-46dzk\" (UID: \"657ce608-1270-43be-98bd-4f1705b45916\") " pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.067985 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztkjq\" (UniqueName: \"kubernetes.io/projected/657ce608-1270-43be-98bd-4f1705b45916-kube-api-access-ztkjq\") pod \"must-gather-46dzk\" (UID: \"657ce608-1270-43be-98bd-4f1705b45916\") " pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.169337 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztkjq\" (UniqueName: \"kubernetes.io/projected/657ce608-1270-43be-98bd-4f1705b45916-kube-api-access-ztkjq\") pod \"must-gather-46dzk\" (UID: \"657ce608-1270-43be-98bd-4f1705b45916\") " pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.169435 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/657ce608-1270-43be-98bd-4f1705b45916-must-gather-output\") pod \"must-gather-46dzk\" (UID: \"657ce608-1270-43be-98bd-4f1705b45916\") " pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.169860 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/657ce608-1270-43be-98bd-4f1705b45916-must-gather-output\") pod \"must-gather-46dzk\" (UID: \"657ce608-1270-43be-98bd-4f1705b45916\") " pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.196483 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztkjq\" (UniqueName: \"kubernetes.io/projected/657ce608-1270-43be-98bd-4f1705b45916-kube-api-access-ztkjq\") pod \"must-gather-46dzk\" (UID: \"657ce608-1270-43be-98bd-4f1705b45916\") " pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.316016 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.747662 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt5nz" event={"ID":"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1","Type":"ContainerStarted","Data":"ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6"} Sep 30 18:56:56 crc kubenswrapper[4688]: I0930 18:56:56.818999 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bd5qb/must-gather-46dzk"] Sep 30 18:56:56 crc kubenswrapper[4688]: W0930 18:56:56.826661 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod657ce608_1270_43be_98bd_4f1705b45916.slice/crio-ed4d8d5af98603a3619b2a910ba3cde61f55a4975477ee93cf98b7122d021c9b WatchSource:0}: Error finding container ed4d8d5af98603a3619b2a910ba3cde61f55a4975477ee93cf98b7122d021c9b: Status 404 returned error can't find the container with id ed4d8d5af98603a3619b2a910ba3cde61f55a4975477ee93cf98b7122d021c9b Sep 30 18:56:57 crc kubenswrapper[4688]: I0930 18:56:57.756696 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/must-gather-46dzk" event={"ID":"657ce608-1270-43be-98bd-4f1705b45916","Type":"ContainerStarted","Data":"ed4d8d5af98603a3619b2a910ba3cde61f55a4975477ee93cf98b7122d021c9b"} Sep 30 18:56:57 crc kubenswrapper[4688]: I0930 18:56:57.758448 4688 generic.go:334] "Generic (PLEG): container finished" podID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerID="ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6" exitCode=0 Sep 30 18:56:57 crc kubenswrapper[4688]: I0930 18:56:57.758505 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt5nz" event={"ID":"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1","Type":"ContainerDied","Data":"ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6"} Sep 30 18:56:58 crc kubenswrapper[4688]: I0930 18:56:58.771286 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt5nz" event={"ID":"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1","Type":"ContainerStarted","Data":"990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716"} Sep 30 18:56:58 crc kubenswrapper[4688]: I0930 18:56:58.799655 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nt5nz" podStartSLOduration=2.370954831 podStartE2EDuration="4.79963309s" podCreationTimestamp="2025-09-30 18:56:54 +0000 UTC" firstStartedPulling="2025-09-30 18:56:55.743226022 +0000 UTC m=+6073.038663580" lastFinishedPulling="2025-09-30 18:56:58.171904281 +0000 UTC m=+6075.467341839" observedRunningTime="2025-09-30 18:56:58.788749248 +0000 UTC m=+6076.084186816" watchObservedRunningTime="2025-09-30 18:56:58.79963309 +0000 UTC m=+6076.095070648" Sep 30 18:57:01 crc kubenswrapper[4688]: I0930 18:57:01.802137 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/must-gather-46dzk" event={"ID":"657ce608-1270-43be-98bd-4f1705b45916","Type":"ContainerStarted","Data":"ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111"} Sep 30 18:57:01 crc kubenswrapper[4688]: I0930 18:57:01.802964 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/must-gather-46dzk" 
event={"ID":"657ce608-1270-43be-98bd-4f1705b45916","Type":"ContainerStarted","Data":"6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1"} Sep 30 18:57:01 crc kubenswrapper[4688]: I0930 18:57:01.819582 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bd5qb/must-gather-46dzk" podStartSLOduration=2.951180427 podStartE2EDuration="6.819544073s" podCreationTimestamp="2025-09-30 18:56:55 +0000 UTC" firstStartedPulling="2025-09-30 18:56:56.828989558 +0000 UTC m=+6074.124427116" lastFinishedPulling="2025-09-30 18:57:00.697353204 +0000 UTC m=+6077.992790762" observedRunningTime="2025-09-30 18:57:01.816312209 +0000 UTC m=+6079.111749777" watchObservedRunningTime="2025-09-30 18:57:01.819544073 +0000 UTC m=+6079.114981641" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.374720 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd5qb/crc-debug-n42xn"] Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.376341 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.459242 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-host\") pod \"crc-debug-n42xn\" (UID: \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\") " pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.459642 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f96mv\" (UniqueName: \"kubernetes.io/projected/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-kube-api-access-f96mv\") pod \"crc-debug-n42xn\" (UID: \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\") " pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.561349 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-host\") pod \"crc-debug-n42xn\" (UID: \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\") " pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.561552 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f96mv\" (UniqueName: \"kubernetes.io/projected/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-kube-api-access-f96mv\") pod \"crc-debug-n42xn\" (UID: \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\") " pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.562225 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-host\") pod \"crc-debug-n42xn\" (UID: \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\") " pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.584620 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f96mv\" (UniqueName: \"kubernetes.io/projected/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-kube-api-access-f96mv\") pod \"crc-debug-n42xn\" (UID: \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\") " pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.692875 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.830327 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/crc-debug-n42xn" event={"ID":"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4","Type":"ContainerStarted","Data":"3fad78eb05a25ad2e393e6771bb451da697761820dd6287af5fdf49ed8de3cbb"} Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.944901 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.944963 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:57:04 crc kubenswrapper[4688]: I0930 18:57:04.994845 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:57:05 crc kubenswrapper[4688]: I0930 18:57:05.882187 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:57:05 crc kubenswrapper[4688]: I0930 18:57:05.925179 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt5nz"] Sep 30 18:57:07 crc kubenswrapper[4688]: I0930 18:57:07.855466 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nt5nz" podUID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerName="registry-server" containerID="cri-o://990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716" gracePeriod=2 Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.370659 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.431004 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-utilities\") pod \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.431094 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-catalog-content\") pod \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.431220 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kdbp\" (UniqueName: \"kubernetes.io/projected/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-kube-api-access-6kdbp\") pod \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\" (UID: \"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1\") " Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.433225 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-utilities" (OuterVolumeSpecName: "utilities") pod "13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" (UID: "13d458c5-a0c4-45d4-84ce-d4d1570f0cf1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.437231 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-kube-api-access-6kdbp" (OuterVolumeSpecName: "kube-api-access-6kdbp") pod "13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" (UID: "13d458c5-a0c4-45d4-84ce-d4d1570f0cf1"). InnerVolumeSpecName "kube-api-access-6kdbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.446714 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" (UID: "13d458c5-a0c4-45d4-84ce-d4d1570f0cf1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.533246 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kdbp\" (UniqueName: \"kubernetes.io/projected/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-kube-api-access-6kdbp\") on node \"crc\" DevicePath \"\"" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.533286 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.533301 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.867141 4688 generic.go:334] "Generic (PLEG): container finished" podID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerID="990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716" exitCode=0 Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.867182 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt5nz" event={"ID":"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1","Type":"ContainerDied","Data":"990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716"} Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.867427 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nt5nz" event={"ID":"13d458c5-a0c4-45d4-84ce-d4d1570f0cf1","Type":"ContainerDied","Data":"b59698c7f8b7c73dbb96045fe35b1d973c2f9b22b53fb7e37f4de93cf186803c"} Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.867449 4688 scope.go:117] "RemoveContainer" containerID="990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.867235 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nt5nz" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.902031 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt5nz"] Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.909312 4688 scope.go:117] "RemoveContainer" containerID="ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6" Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.914947 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nt5nz"] Sep 30 18:57:08 crc kubenswrapper[4688]: I0930 18:57:08.936633 4688 scope.go:117] "RemoveContainer" containerID="fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4" Sep 30 18:57:09 crc kubenswrapper[4688]: I0930 18:57:09.002671 4688 scope.go:117] "RemoveContainer" containerID="990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716" Sep 30 18:57:09 crc kubenswrapper[4688]: E0930 18:57:09.003215 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716\": container with ID starting with 990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716 not found: ID does not exist" containerID="990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716" Sep 30 18:57:09 crc kubenswrapper[4688]: I0930 18:57:09.003243 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716"} err="failed to get container status \"990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716\": rpc error: code = NotFound desc = could not find container \"990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716\": container with ID starting with 990ecd0ac8974f95840ae1ff3cb3bc8e238c2eb626dc114ab6c192ee25fbb716 not found: ID does not exist" Sep 30 18:57:09 crc kubenswrapper[4688]: I0930 18:57:09.003264 4688 scope.go:117] "RemoveContainer" containerID="ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6" Sep 30 18:57:09 crc kubenswrapper[4688]: E0930 18:57:09.003519 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6\": container with ID starting with ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6 not found: ID does not exist" containerID="ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6" Sep 30 18:57:09 crc kubenswrapper[4688]: I0930 18:57:09.003543 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6"} err="failed to get container status \"ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6\": rpc error: code = NotFound desc = could not find container \"ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6\": container with ID starting with ee9492f993ed76ea18815fbf88a683704f162b7cd84c75c4317c42af879e3fb6 not found: ID does not exist" Sep 30 18:57:09 crc kubenswrapper[4688]: I0930 18:57:09.003579 4688 scope.go:117] "RemoveContainer" containerID="fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4" Sep 30 18:57:09 crc kubenswrapper[4688]: E0930 18:57:09.003970 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4\": container with ID starting with fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4 not found: ID does not exist" containerID="fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4" Sep 30 18:57:09 crc kubenswrapper[4688]: I0930 18:57:09.003997 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4"} err="failed to get container status \"fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4\": rpc error: code = NotFound desc = could not find container \"fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4\": container with ID starting with fda38a1c062834f414fa278895baf537170ea8412de7ae54485aa293e33590a4 not found: ID does not exist" Sep 30 18:57:09 crc kubenswrapper[4688]: I0930 18:57:09.450428 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" path="/var/lib/kubelet/pods/13d458c5-a0c4-45d4-84ce-d4d1570f0cf1/volumes" Sep 30 18:57:16 crc kubenswrapper[4688]: I0930 18:57:16.935268 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/crc-debug-n42xn" event={"ID":"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4","Type":"ContainerStarted","Data":"f5d485dbac0a7fa45ff0cdcaf21361a4fde186b2c180308c370fe77ef3440355"} Sep 30 18:57:16 crc kubenswrapper[4688]: I0930 18:57:16.952686 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bd5qb/crc-debug-n42xn" podStartSLOduration=1.394821072 podStartE2EDuration="12.952669867s" podCreationTimestamp="2025-09-30 18:57:04 +0000 UTC" firstStartedPulling="2025-09-30 18:57:04.747419103 +0000 UTC m=+6082.042856661" lastFinishedPulling="2025-09-30 18:57:16.305267898 +0000 UTC m=+6093.600705456" observedRunningTime="2025-09-30 18:57:16.946554119 +0000 UTC m=+6094.241991677" watchObservedRunningTime="2025-09-30 18:57:16.952669867 +0000 UTC m=+6094.248107425" Sep 30 18:57:52 crc kubenswrapper[4688]: I0930 18:57:52.546196 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:57:52 crc kubenswrapper[4688]: I0930 18:57:52.546824 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:58:02 crc kubenswrapper[4688]: I0930 18:58:02.883679 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f77664db-t4vhn_4f423e22-c7d3-4385-82a5-dca95aef82a0/barbican-api/0.log" Sep 30 18:58:02 crc kubenswrapper[4688]: I0930 18:58:02.937381 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f77664db-t4vhn_4f423e22-c7d3-4385-82a5-dca95aef82a0/barbican-api-log/0.log" Sep 30 18:58:03 crc kubenswrapper[4688]: I0930 18:58:03.126123 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-7bdbc5f8bb-9xltz_8f4dccc2-3d97-4492-acb8-a6e080810069/barbican-keystone-listener/0.log" Sep 30 18:58:03 crc kubenswrapper[4688]: I0930 18:58:03.175923 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7bdbc5f8bb-9xltz_8f4dccc2-3d97-4492-acb8-a6e080810069/barbican-keystone-listener-log/0.log" Sep 30 18:58:03 crc kubenswrapper[4688]: I0930 18:58:03.381743 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66df74b4fc-5jlzp_69fd9ea3-7c23-450d-b3e9-b5bb57e4f324/barbican-worker/0.log" Sep 30 18:58:03 crc kubenswrapper[4688]: I0930 18:58:03.399871 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66df74b4fc-5jlzp_69fd9ea3-7c23-450d-b3e9-b5bb57e4f324/barbican-worker-log/0.log" Sep 30 18:58:03 crc kubenswrapper[4688]: I0930 18:58:03.629231 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d_3002489d-18d9-4ab0-b09e-ab37e97ec796/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:03 crc kubenswrapper[4688]: I0930 18:58:03.820008 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5f96c12-889c-4c0a-a332-9d58c5fa6526/ceilometer-central-agent/0.log" Sep 30 18:58:03 crc kubenswrapper[4688]: I0930 18:58:03.847318 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5f96c12-889c-4c0a-a332-9d58c5fa6526/ceilometer-notification-agent/0.log" Sep 30 18:58:03 crc kubenswrapper[4688]: I0930 18:58:03.897479 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5f96c12-889c-4c0a-a332-9d58c5fa6526/proxy-httpd/0.log" Sep 30 18:58:04 crc kubenswrapper[4688]: I0930 18:58:04.009048 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5f96c12-889c-4c0a-a332-9d58c5fa6526/sg-core/0.log" Sep 30 18:58:04 crc kubenswrapper[4688]: I0930 18:58:04.166722 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1527bf09-6ef0-4fda-a749-9f6440f17159/cinder-api/0.log" Sep 30 18:58:04 crc kubenswrapper[4688]: I0930 18:58:04.261224 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1527bf09-6ef0-4fda-a749-9f6440f17159/cinder-api-log/0.log" Sep 30 18:58:04 crc kubenswrapper[4688]: I0930 18:58:04.358061 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8344d27c-987c-4c41-8491-9856f5b93c3f/cinder-scheduler/0.log" Sep 30 18:58:04 crc kubenswrapper[4688]: I0930 18:58:04.495498 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8344d27c-987c-4c41-8491-9856f5b93c3f/probe/0.log" Sep 30 18:58:04 crc kubenswrapper[4688]: I0930 18:58:04.650453 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-z475s_4b7f1d4c-3151-4220-87e0-84fb8dff5710/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:04 crc kubenswrapper[4688]: I0930 18:58:04.864026 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-f64rr_f8b375f4-25b5-42f4-9b62-763fd31e3404/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:04 crc kubenswrapper[4688]: I0930 18:58:04.941792 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-qcftq_03ed4997-f615-4267-9738-7cd2fd9a0cb9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:05 crc kubenswrapper[4688]: I0930 18:58:05.135944 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b757848d5-m4nb9_7fe9be74-65ff-4cd9-8dc4-866b771262bc/init/0.log" Sep 30 18:58:05 crc kubenswrapper[4688]: I0930 18:58:05.282195 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b757848d5-m4nb9_7fe9be74-65ff-4cd9-8dc4-866b771262bc/init/0.log" Sep 30 18:58:05 crc kubenswrapper[4688]: I0930 18:58:05.438601 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b757848d5-m4nb9_7fe9be74-65ff-4cd9-8dc4-866b771262bc/dnsmasq-dns/0.log" Sep 30 18:58:05 crc kubenswrapper[4688]: I0930 18:58:05.573387 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-h8grj_7f941024-be27-4d1b-8fa3-3d8f1ff434ba/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:05 crc kubenswrapper[4688]: I0930 18:58:05.647239 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4/glance-httpd/0.log" Sep 30 18:58:05 crc kubenswrapper[4688]: I0930 18:58:05.809905 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4/glance-log/0.log" Sep 30 18:58:05 crc kubenswrapper[4688]: I0930 18:58:05.938033 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_59dd37af-4be8-4c5c-900f-efbd6c09fcf7/glance-httpd/0.log" Sep 30 18:58:06 crc kubenswrapper[4688]: I0930 18:58:06.006671 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_59dd37af-4be8-4c5c-900f-efbd6c09fcf7/glance-log/0.log" Sep 30 18:58:06 crc kubenswrapper[4688]: I0930 18:58:06.258496 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dddb98558-nts96_c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c/horizon/0.log" Sep 30 18:58:06 crc kubenswrapper[4688]: I0930 18:58:06.550013 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-d7wch_faf29e19-4d73-430a-937d-2b2b02ef6d70/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:06 crc kubenswrapper[4688]: I0930 18:58:06.661954 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-64mcs_a39b16b2-47a0-4014-a6d5-286fb2daa1a2/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:06 crc kubenswrapper[4688]: I0930 18:58:06.824220 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dddb98558-nts96_c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c/horizon-log/0.log" Sep 30 18:58:07 crc kubenswrapper[4688]: I0930 18:58:07.067562 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29320921-spfn9_38f66d5e-b38e-480b-8537-befc7512cc9e/keystone-cron/0.log" Sep 30 18:58:07 crc kubenswrapper[4688]: I0930 18:58:07.153444 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2feddb6b-68dd-45da-aea7-ce4061c8b067/kube-state-metrics/0.log" Sep 30 18:58:07 crc kubenswrapper[4688]: I0930 18:58:07.415444 4688 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-wghcn_b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:07 crc kubenswrapper[4688]: I0930 18:58:07.447919 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7cb94dd69b-49qxn_6b52ebb3-8313-4e9d-9227-bd89bf88639d/keystone-api/0.log" Sep 30 18:58:08 crc kubenswrapper[4688]: I0930 18:58:08.063073 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5d8684c999-h2g4c_fcb14b73-4266-4adf-a6cf-649855edb179/neutron-httpd/0.log" Sep 30 18:58:08 crc kubenswrapper[4688]: I0930 18:58:08.252840 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5d8684c999-h2g4c_fcb14b73-4266-4adf-a6cf-649855edb179/neutron-api/0.log" Sep 30 18:58:08 crc kubenswrapper[4688]: I0930 18:58:08.352885 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd_a36a505a-e276-4557-b4a5-dab7af0803fd/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:09 crc kubenswrapper[4688]: I0930 18:58:09.549156 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d73b96c1-ad6b-4c56-a8db-4d4e69ff323e/nova-cell0-conductor-conductor/0.log" Sep 30 18:58:10 crc kubenswrapper[4688]: I0930 18:58:10.174180 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_357c4a92-d57a-4c57-9cee-7a98de90edff/nova-cell1-conductor-conductor/0.log" Sep 30 18:58:10 crc kubenswrapper[4688]: I0930 18:58:10.578258 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_61afa819-98c6-408a-b62b-9098624f6e38/nova-api-log/0.log" Sep 30 18:58:10 crc kubenswrapper[4688]: I0930 18:58:10.800489 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ba1f86f8-c725-4de3-89f9-4aed0b031859/nova-cell1-novncproxy-novncproxy/0.log" Sep 30 18:58:11 crc kubenswrapper[4688]: I0930 18:58:11.205205 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-62pl5_82afbeb4-8c01-47b4-a07c-b9d1fd405e39/nova-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:11 crc kubenswrapper[4688]: I0930 18:58:11.269069 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_61afa819-98c6-408a-b62b-9098624f6e38/nova-api-api/0.log" Sep 30 18:58:12 crc kubenswrapper[4688]: I0930 18:58:12.062597 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d423914e-2c66-43fb-984f-8936b0c1e5af/nova-metadata-log/0.log" Sep 30 18:58:12 crc kubenswrapper[4688]: I0930 18:58:12.122207 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4967d3fb-572e-4d96-85f2-7f3ef31acc3b/nova-scheduler-scheduler/0.log" Sep 30 18:58:12 crc kubenswrapper[4688]: I0930 18:58:12.415751 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_47da376c-d470-460a-90a1-00e696c83821/mysql-bootstrap/0.log" Sep 30 18:58:12 crc kubenswrapper[4688]: I0930 18:58:12.610967 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_47da376c-d470-460a-90a1-00e696c83821/mysql-bootstrap/0.log" Sep 30 18:58:12 crc kubenswrapper[4688]: I0930 18:58:12.693024 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_47da376c-d470-460a-90a1-00e696c83821/galera/0.log" Sep 30 18:58:12 crc kubenswrapper[4688]: I0930 18:58:12.990530 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_16d86c38-a2f6-4f33-8a3f-d66ff12bdb38/mysql-bootstrap/0.log" Sep 30 18:58:13 crc kubenswrapper[4688]: I0930 18:58:13.254828 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_16d86c38-a2f6-4f33-8a3f-d66ff12bdb38/mysql-bootstrap/0.log" Sep 30 18:58:13 crc kubenswrapper[4688]: I0930 18:58:13.275782 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_16d86c38-a2f6-4f33-8a3f-d66ff12bdb38/galera/0.log" Sep 30 18:58:13 crc kubenswrapper[4688]: I0930 18:58:13.497314 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7ec99c41-9858-45f2-84b9-c3fd05deb7fd/openstackclient/0.log" Sep 30 18:58:13 crc kubenswrapper[4688]: I0930 18:58:13.821227 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-s67vw_263d16ad-0943-47a4-99f5-4b56dd223472/openstack-network-exporter/0.log" Sep 30 18:58:14 crc kubenswrapper[4688]: I0930 18:58:14.067065 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dsjj5_a4a05b21-e8c9-45f9-bf12-56c74c0672fd/ovsdb-server-init/0.log" Sep 30 18:58:14 crc kubenswrapper[4688]: I0930 18:58:14.299997 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dsjj5_a4a05b21-e8c9-45f9-bf12-56c74c0672fd/ovsdb-server-init/0.log" Sep 30 18:58:14 crc kubenswrapper[4688]: I0930 18:58:14.310819 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dsjj5_a4a05b21-e8c9-45f9-bf12-56c74c0672fd/ovs-vswitchd/0.log" Sep 30 18:58:14 crc kubenswrapper[4688]: I0930 18:58:14.549126 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dsjj5_a4a05b21-e8c9-45f9-bf12-56c74c0672fd/ovsdb-server/0.log" Sep 30 18:58:14 crc kubenswrapper[4688]: I0930 18:58:14.749828 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-v6d9x_a5a55636-8e3a-49bf-8c31-2980f614fd65/ovn-controller/0.log" Sep 30 18:58:15 crc kubenswrapper[4688]: I0930 18:58:15.007914 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qjw26_01be6ded-891d-4d74-9af3-2adcc0d9ebb4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:15 crc kubenswrapper[4688]: I0930 18:58:15.215275 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d9a63408-631b-4f20-a556-39bb5835e71e/openstack-network-exporter/0.log" Sep 30 18:58:15 crc kubenswrapper[4688]: I0930 18:58:15.326100 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d9a63408-631b-4f20-a556-39bb5835e71e/ovn-northd/0.log" Sep 30 18:58:15 crc kubenswrapper[4688]: I0930 18:58:15.418325 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d423914e-2c66-43fb-984f-8936b0c1e5af/nova-metadata-metadata/0.log" Sep 30 18:58:15 crc kubenswrapper[4688]: I0930 18:58:15.587680 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de/openstack-network-exporter/0.log" Sep 30 18:58:15 crc kubenswrapper[4688]: I0930 18:58:15.687689 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de/ovsdbserver-nb/0.log" Sep 30 18:58:15 crc kubenswrapper[4688]: I0930 18:58:15.790272 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_44203c2b-b94c-41a5-908c-a79aa2790bfc/openstack-network-exporter/0.log" Sep 30 18:58:15 crc kubenswrapper[4688]: I0930 18:58:15.894827 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_44203c2b-b94c-41a5-908c-a79aa2790bfc/ovsdbserver-sb/0.log" Sep 30 18:58:16 crc kubenswrapper[4688]: I0930 18:58:16.266475 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-67b5cc9656-9lxh4_27f236ad-c05c-4f2e-b68d-d6b867c456f8/placement-api/0.log" Sep 30 18:58:16 crc kubenswrapper[4688]: I0930 18:58:16.478925 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-67b5cc9656-9lxh4_27f236ad-c05c-4f2e-b68d-d6b867c456f8/placement-log/0.log" Sep 30 18:58:16 crc kubenswrapper[4688]: I0930 18:58:16.514940 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_36c7ea93-ad69-4dda-abfe-2679e70870d8/setup-container/0.log" Sep 30 18:58:16 crc kubenswrapper[4688]: I0930 18:58:16.722851 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_36c7ea93-ad69-4dda-abfe-2679e70870d8/setup-container/0.log" Sep 30 18:58:16 crc kubenswrapper[4688]: I0930 18:58:16.737689 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_36c7ea93-ad69-4dda-abfe-2679e70870d8/rabbitmq/0.log" Sep 30 18:58:16 crc kubenswrapper[4688]: I0930 18:58:16.892787 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_eb8b2810-7037-4a14-8a57-b65f8d9f8af7/setup-container/0.log" Sep 30 18:58:17 crc kubenswrapper[4688]: I0930 18:58:17.075206 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_eb8b2810-7037-4a14-8a57-b65f8d9f8af7/setup-container/0.log" Sep 30 18:58:17 crc kubenswrapper[4688]: I0930 18:58:17.138715 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_eb8b2810-7037-4a14-8a57-b65f8d9f8af7/rabbitmq/0.log" Sep 30 18:58:17 crc kubenswrapper[4688]: I0930 18:58:17.288713 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45_b95ac163-c6b0-4fa0-a04c-5d4e0389c238/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:17 crc kubenswrapper[4688]: I0930 18:58:17.436897 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zqw69_f33c7eae-02eb-440e-ac3a-f16f098efd47/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:17 crc kubenswrapper[4688]: I0930 18:58:17.552005 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc_e67f930b-db0b-4d29-b9e5-5a259d6a9534/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:17 crc kubenswrapper[4688]: I0930 18:58:17.678656 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-66ftv_857e37c3-03d7-434d-80d4-6ee03574499e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:17 crc kubenswrapper[4688]: I0930 18:58:17.873028 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-lb96p_386f214d-f18b-475f-828b-9fb63a356714/ssh-known-hosts-edpm-deployment/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.126404 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-647757d8c5-6pzkp_62277faa-c7c7-4151-b35f-f3ebe78d3ffe/proxy-server/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.285207 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-647757d8c5-6pzkp_62277faa-c7c7-4151-b35f-f3ebe78d3ffe/proxy-httpd/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.381963 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9rwlk_d8f8de37-9747-4c9d-b61b-f89ad32ba2fd/swift-ring-rebalance/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.494294 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/account-auditor/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.653246 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/account-reaper/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.733759 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/account-server/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.765380 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/account-replicator/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.857129 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/container-auditor/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.953754 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/container-replicator/0.log" Sep 30 18:58:18 crc kubenswrapper[4688]: I0930 18:58:18.958780 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/container-server/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.064532 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/container-updater/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.167873 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-auditor/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.232052 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-expirer/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.262066 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-replicator/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.406751 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-updater/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.412698 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-server/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.489390 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/rsync/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.603626 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/swift-recon-cron/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.737205 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p_ffd57ee0-1383-4acb-8892-b5d00ae71ea4/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:19 crc kubenswrapper[4688]: I0930 18:58:19.836709 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq_4bb29aec-2e71-4505-b1d0-e79b6aaecceb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 18:58:22 crc kubenswrapper[4688]: I0930 18:58:22.546050 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:58:22 crc kubenswrapper[4688]: I0930 18:58:22.546678 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:58:23 crc kubenswrapper[4688]: I0930 18:58:23.723154 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1988f233-6da1-4c36-acbc-33f7dbaab1b0/memcached/0.log" Sep 30 18:58:52 crc kubenswrapper[4688]: I0930 18:58:52.545957 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 18:58:52 crc kubenswrapper[4688]: I0930 18:58:52.546551 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 18:58:52 crc kubenswrapper[4688]: I0930 18:58:52.546639 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 18:58:52 crc kubenswrapper[4688]: I0930 18:58:52.547610 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61972f67c084cf8890fe543a5bc7f115c88b899433a5ca21104e9c9ba1c273a5"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 18:58:52 crc kubenswrapper[4688]: I0930 18:58:52.547719 4688 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://61972f67c084cf8890fe543a5bc7f115c88b899433a5ca21104e9c9ba1c273a5" gracePeriod=600 Sep 30 18:58:52 crc kubenswrapper[4688]: I0930 18:58:52.850551 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="61972f67c084cf8890fe543a5bc7f115c88b899433a5ca21104e9c9ba1c273a5" exitCode=0 Sep 30 18:58:52 crc kubenswrapper[4688]: I0930 18:58:52.851760 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"61972f67c084cf8890fe543a5bc7f115c88b899433a5ca21104e9c9ba1c273a5"} Sep 30 18:58:52 crc kubenswrapper[4688]: I0930 18:58:52.851872 4688 scope.go:117] "RemoveContainer" containerID="934efd03968d7292a589b49f7ebc1fbb98b5e55119d7847bf8862a3d8045d834" Sep 30 18:58:53 crc kubenswrapper[4688]: I0930 18:58:53.863878 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35"} Sep 30 18:59:19 crc kubenswrapper[4688]: I0930 18:59:19.108819 4688 generic.go:334] "Generic (PLEG): container finished" podID="ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4" containerID="f5d485dbac0a7fa45ff0cdcaf21361a4fde186b2c180308c370fe77ef3440355" exitCode=0 Sep 30 18:59:19 crc kubenswrapper[4688]: I0930 18:59:19.108962 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/crc-debug-n42xn" event={"ID":"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4","Type":"ContainerDied","Data":"f5d485dbac0a7fa45ff0cdcaf21361a4fde186b2c180308c370fe77ef3440355"} Sep 30 18:59:20 crc kubenswrapper[4688]: I0930 18:59:20.247587 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:59:20 crc kubenswrapper[4688]: I0930 18:59:20.277627 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd5qb/crc-debug-n42xn"] Sep 30 18:59:20 crc kubenswrapper[4688]: I0930 18:59:20.286592 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd5qb/crc-debug-n42xn"] Sep 30 18:59:20 crc kubenswrapper[4688]: I0930 18:59:20.418031 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f96mv\" (UniqueName: \"kubernetes.io/projected/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-kube-api-access-f96mv\") pod \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\" (UID: \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\") " Sep 30 18:59:20 crc kubenswrapper[4688]: I0930 18:59:20.418484 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-host\") pod \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\" (UID: \"ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4\") " Sep 30 18:59:20 crc kubenswrapper[4688]: I0930 18:59:20.418637 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-host" (OuterVolumeSpecName: "host") pod "ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4" (UID: "ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 18:59:20 crc kubenswrapper[4688]: I0930 18:59:20.419139 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-host\") on node \"crc\" DevicePath \"\"" Sep 30 18:59:20 crc kubenswrapper[4688]: I0930 18:59:20.426598 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-kube-api-access-f96mv" (OuterVolumeSpecName: "kube-api-access-f96mv") pod "ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4" (UID: "ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4"). InnerVolumeSpecName "kube-api-access-f96mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:59:20 crc kubenswrapper[4688]: I0930 18:59:20.522425 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f96mv\" (UniqueName: \"kubernetes.io/projected/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4-kube-api-access-f96mv\") on node \"crc\" DevicePath \"\"" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.139022 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fad78eb05a25ad2e393e6771bb451da697761820dd6287af5fdf49ed8de3cbb" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.139097 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-n42xn" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.447903 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4" path="/var/lib/kubelet/pods/ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4/volumes" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.449306 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd5qb/crc-debug-jkcwm"] Sep 30 18:59:21 crc kubenswrapper[4688]: E0930 18:59:21.449883 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4" containerName="container-00" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.449912 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4" containerName="container-00" Sep 30 18:59:21 crc kubenswrapper[4688]: E0930 18:59:21.449938 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerName="registry-server" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.449953 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerName="registry-server" Sep 30 18:59:21 crc kubenswrapper[4688]: E0930 18:59:21.449978 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerName="extract-utilities" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.449991 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerName="extract-utilities" Sep 30 18:59:21 crc kubenswrapper[4688]: E0930 18:59:21.450025 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerName="extract-content" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.450037 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerName="extract-content" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.450427 4688 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="13d458c5-a0c4-45d4-84ce-d4d1570f0cf1" containerName="registry-server" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.450466 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0a8938-6ff2-45f8-b4a0-2a1a66d26ad4" containerName="container-00" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.451835 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.644422 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef4fcc5-4a81-4350-a605-68c731bb107a-host\") pod \"crc-debug-jkcwm\" (UID: \"6ef4fcc5-4a81-4350-a605-68c731bb107a\") " pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.644518 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kngwh\" (UniqueName: \"kubernetes.io/projected/6ef4fcc5-4a81-4350-a605-68c731bb107a-kube-api-access-kngwh\") pod \"crc-debug-jkcwm\" (UID: \"6ef4fcc5-4a81-4350-a605-68c731bb107a\") " pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.747028 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef4fcc5-4a81-4350-a605-68c731bb107a-host\") pod \"crc-debug-jkcwm\" (UID: \"6ef4fcc5-4a81-4350-a605-68c731bb107a\") " pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.747188 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kngwh\" (UniqueName: \"kubernetes.io/projected/6ef4fcc5-4a81-4350-a605-68c731bb107a-kube-api-access-kngwh\") pod \"crc-debug-jkcwm\" (UID: \"6ef4fcc5-4a81-4350-a605-68c731bb107a\") " pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.747232 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef4fcc5-4a81-4350-a605-68c731bb107a-host\") pod \"crc-debug-jkcwm\" (UID: \"6ef4fcc5-4a81-4350-a605-68c731bb107a\") " pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.767188 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kngwh\" (UniqueName: \"kubernetes.io/projected/6ef4fcc5-4a81-4350-a605-68c731bb107a-kube-api-access-kngwh\") pod \"crc-debug-jkcwm\" (UID: \"6ef4fcc5-4a81-4350-a605-68c731bb107a\") " pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:21 crc kubenswrapper[4688]: I0930 18:59:21.776014 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:21 crc kubenswrapper[4688]: W0930 18:59:21.844236 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ef4fcc5_4a81_4350_a605_68c731bb107a.slice/crio-a619b3c59abf9a5c8913755f37978d1f7c2ce78eceffe4162fc1618562079e3a WatchSource:0}: Error finding container a619b3c59abf9a5c8913755f37978d1f7c2ce78eceffe4162fc1618562079e3a: Status 404 returned error can't find the container with id a619b3c59abf9a5c8913755f37978d1f7c2ce78eceffe4162fc1618562079e3a Sep 30 18:59:22 crc kubenswrapper[4688]: I0930 18:59:22.153776 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" event={"ID":"6ef4fcc5-4a81-4350-a605-68c731bb107a","Type":"ContainerStarted","Data":"0e5a39d53f1c3cf4b205a79b0221080351e51afd6d3d919b9ae47967f54ae9f9"} Sep 30 18:59:22 crc kubenswrapper[4688]: I0930 18:59:22.153875 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" event={"ID":"6ef4fcc5-4a81-4350-a605-68c731bb107a","Type":"ContainerStarted","Data":"a619b3c59abf9a5c8913755f37978d1f7c2ce78eceffe4162fc1618562079e3a"} Sep 30 18:59:22 crc kubenswrapper[4688]: I0930 18:59:22.177421 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" podStartSLOduration=1.177398854 podStartE2EDuration="1.177398854s" podCreationTimestamp="2025-09-30 18:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 18:59:22.170867095 +0000 UTC m=+6219.466304663" watchObservedRunningTime="2025-09-30 18:59:22.177398854 +0000 UTC m=+6219.472836422" Sep 30 18:59:23 crc kubenswrapper[4688]: I0930 18:59:23.169813 4688 generic.go:334] "Generic (PLEG): container finished" podID="6ef4fcc5-4a81-4350-a605-68c731bb107a" containerID="0e5a39d53f1c3cf4b205a79b0221080351e51afd6d3d919b9ae47967f54ae9f9" exitCode=0 Sep 30 18:59:23 crc kubenswrapper[4688]: I0930 18:59:23.169899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" event={"ID":"6ef4fcc5-4a81-4350-a605-68c731bb107a","Type":"ContainerDied","Data":"0e5a39d53f1c3cf4b205a79b0221080351e51afd6d3d919b9ae47967f54ae9f9"} Sep 30 18:59:24 crc kubenswrapper[4688]: I0930 18:59:24.293586 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:24 crc kubenswrapper[4688]: I0930 18:59:24.397966 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef4fcc5-4a81-4350-a605-68c731bb107a-host\") pod \"6ef4fcc5-4a81-4350-a605-68c731bb107a\" (UID: \"6ef4fcc5-4a81-4350-a605-68c731bb107a\") " Sep 30 18:59:24 crc kubenswrapper[4688]: I0930 18:59:24.398066 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kngwh\" (UniqueName: \"kubernetes.io/projected/6ef4fcc5-4a81-4350-a605-68c731bb107a-kube-api-access-kngwh\") pod \"6ef4fcc5-4a81-4350-a605-68c731bb107a\" (UID: \"6ef4fcc5-4a81-4350-a605-68c731bb107a\") " Sep 30 18:59:24 crc kubenswrapper[4688]: I0930 18:59:24.398297 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef4fcc5-4a81-4350-a605-68c731bb107a-host" (OuterVolumeSpecName: "host") pod "6ef4fcc5-4a81-4350-a605-68c731bb107a" (UID: "6ef4fcc5-4a81-4350-a605-68c731bb107a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 18:59:24 crc kubenswrapper[4688]: I0930 18:59:24.398618 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef4fcc5-4a81-4350-a605-68c731bb107a-host\") on node \"crc\" DevicePath \"\"" Sep 30 18:59:24 crc kubenswrapper[4688]: I0930 18:59:24.404898 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef4fcc5-4a81-4350-a605-68c731bb107a-kube-api-access-kngwh" (OuterVolumeSpecName: "kube-api-access-kngwh") pod "6ef4fcc5-4a81-4350-a605-68c731bb107a" (UID: "6ef4fcc5-4a81-4350-a605-68c731bb107a"). InnerVolumeSpecName "kube-api-access-kngwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:59:24 crc kubenswrapper[4688]: I0930 18:59:24.499859 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kngwh\" (UniqueName: \"kubernetes.io/projected/6ef4fcc5-4a81-4350-a605-68c731bb107a-kube-api-access-kngwh\") on node \"crc\" DevicePath \"\"" Sep 30 18:59:25 crc kubenswrapper[4688]: I0930 18:59:25.186648 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" event={"ID":"6ef4fcc5-4a81-4350-a605-68c731bb107a","Type":"ContainerDied","Data":"a619b3c59abf9a5c8913755f37978d1f7c2ce78eceffe4162fc1618562079e3a"} Sep 30 18:59:25 crc kubenswrapper[4688]: I0930 18:59:25.186693 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a619b3c59abf9a5c8913755f37978d1f7c2ce78eceffe4162fc1618562079e3a" Sep 30 18:59:25 crc kubenswrapper[4688]: I0930 18:59:25.186732 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-jkcwm" Sep 30 18:59:31 crc kubenswrapper[4688]: I0930 18:59:31.864513 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd5qb/crc-debug-jkcwm"] Sep 30 18:59:31 crc kubenswrapper[4688]: I0930 18:59:31.871109 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd5qb/crc-debug-jkcwm"] Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.049290 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd5qb/crc-debug-n4rj7"] Sep 30 18:59:33 crc kubenswrapper[4688]: E0930 18:59:33.049973 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef4fcc5-4a81-4350-a605-68c731bb107a" containerName="container-00" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.049988 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef4fcc5-4a81-4350-a605-68c731bb107a" containerName="container-00" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.050237 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef4fcc5-4a81-4350-a605-68c731bb107a" containerName="container-00" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.050941 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.243644 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fdcm\" (UniqueName: \"kubernetes.io/projected/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-kube-api-access-2fdcm\") pod \"crc-debug-n4rj7\" (UID: \"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\") " pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.243921 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-host\") pod \"crc-debug-n4rj7\" (UID: \"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\") " pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.345972 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fdcm\" (UniqueName: \"kubernetes.io/projected/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-kube-api-access-2fdcm\") pod \"crc-debug-n4rj7\" (UID: \"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\") " pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.346176 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-host\") pod \"crc-debug-n4rj7\" (UID: \"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\") " pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.346267 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-host\") pod \"crc-debug-n4rj7\" (UID: \"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\") " pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.386912 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fdcm\" (UniqueName: \"kubernetes.io/projected/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-kube-api-access-2fdcm\") pod \"crc-debug-n4rj7\" (UID: 
\"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\") " pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.449112 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef4fcc5-4a81-4350-a605-68c731bb107a" path="/var/lib/kubelet/pods/6ef4fcc5-4a81-4350-a605-68c731bb107a/volumes" Sep 30 18:59:33 crc kubenswrapper[4688]: I0930 18:59:33.673124 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:34 crc kubenswrapper[4688]: I0930 18:59:34.274462 4688 generic.go:334] "Generic (PLEG): container finished" podID="c63f22d9-52a3-45b1-a2b1-1da1d0d65255" containerID="97ec863d3fc54f4bf3dfe2a75da09856238db1f6402ac301602fc00230e9a6aa" exitCode=0 Sep 30 18:59:34 crc kubenswrapper[4688]: I0930 18:59:34.274609 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" event={"ID":"c63f22d9-52a3-45b1-a2b1-1da1d0d65255","Type":"ContainerDied","Data":"97ec863d3fc54f4bf3dfe2a75da09856238db1f6402ac301602fc00230e9a6aa"} Sep 30 18:59:34 crc kubenswrapper[4688]: I0930 18:59:34.274883 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" event={"ID":"c63f22d9-52a3-45b1-a2b1-1da1d0d65255","Type":"ContainerStarted","Data":"ade3ed421b5810bf5415c1b4a4461d1d44c7d00e788fcd948e2e6716092f9c1c"} Sep 30 18:59:34 crc kubenswrapper[4688]: I0930 18:59:34.313749 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd5qb/crc-debug-n4rj7"] Sep 30 18:59:34 crc kubenswrapper[4688]: I0930 18:59:34.322803 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd5qb/crc-debug-n4rj7"] Sep 30 18:59:35 crc kubenswrapper[4688]: I0930 18:59:35.395794 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:35 crc kubenswrapper[4688]: I0930 18:59:35.589334 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fdcm\" (UniqueName: \"kubernetes.io/projected/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-kube-api-access-2fdcm\") pod \"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\" (UID: \"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\") " Sep 30 18:59:35 crc kubenswrapper[4688]: I0930 18:59:35.589429 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-host\") pod \"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\" (UID: \"c63f22d9-52a3-45b1-a2b1-1da1d0d65255\") " Sep 30 18:59:35 crc kubenswrapper[4688]: I0930 18:59:35.589532 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-host" (OuterVolumeSpecName: "host") pod "c63f22d9-52a3-45b1-a2b1-1da1d0d65255" (UID: "c63f22d9-52a3-45b1-a2b1-1da1d0d65255"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 18:59:35 crc kubenswrapper[4688]: I0930 18:59:35.589889 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-host\") on node \"crc\" DevicePath \"\"" Sep 30 18:59:35 crc kubenswrapper[4688]: I0930 18:59:35.602292 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-kube-api-access-2fdcm" (OuterVolumeSpecName: "kube-api-access-2fdcm") pod "c63f22d9-52a3-45b1-a2b1-1da1d0d65255" (UID: "c63f22d9-52a3-45b1-a2b1-1da1d0d65255"). InnerVolumeSpecName "kube-api-access-2fdcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 18:59:35 crc kubenswrapper[4688]: I0930 18:59:35.691310 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fdcm\" (UniqueName: \"kubernetes.io/projected/c63f22d9-52a3-45b1-a2b1-1da1d0d65255-kube-api-access-2fdcm\") on node \"crc\" DevicePath \"\"" Sep 30 18:59:35 crc kubenswrapper[4688]: I0930 18:59:35.848380 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/util/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.016325 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/pull/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.031120 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/pull/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.045720 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/util/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.284009 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/util/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.284794 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/pull/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.291252 4688 scope.go:117] "RemoveContainer" containerID="97ec863d3fc54f4bf3dfe2a75da09856238db1f6402ac301602fc00230e9a6aa" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.291284 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5qb/crc-debug-n4rj7" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.353156 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/extract/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.503930 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-zrrc8_b4b7968e-2dbf-463c-addf-f79bb82c5fb3/kube-rbac-proxy/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.622757 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-zrrc8_b4b7968e-2dbf-463c-addf-f79bb82c5fb3/manager/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.630837 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-6ng88_ef1716f0-3b35-433b-8ad9-5127c151a43f/kube-rbac-proxy/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.717312 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-6ng88_ef1716f0-3b35-433b-8ad9-5127c151a43f/manager/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.790885 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-xnhgk_08d7f313-71b6-4d2e-b67d-c9e1ddaca02d/kube-rbac-proxy/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.867106 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-xnhgk_08d7f313-71b6-4d2e-b67d-c9e1ddaca02d/manager/0.log" Sep 30 18:59:36 crc kubenswrapper[4688]: I0930 18:59:36.940082 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-98rhq_1bea0309-4386-4a05-a980-1104ec2e37c2/kube-rbac-proxy/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.249168 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-98rhq_1bea0309-4386-4a05-a980-1104ec2e37c2/manager/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.293642 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-vgshh_7c01591d-ed02-4a38-b0ec-560d9b41cc8c/kube-rbac-proxy/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.322535 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-vgshh_7c01591d-ed02-4a38-b0ec-560d9b41cc8c/manager/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.446986 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-bklgp_8a89103b-24b0-456d-88c5-f6abd68e75fb/kube-rbac-proxy/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.449294 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63f22d9-52a3-45b1-a2b1-1da1d0d65255" path="/var/lib/kubelet/pods/c63f22d9-52a3-45b1-a2b1-1da1d0d65255/volumes" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.504656 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-bklgp_8a89103b-24b0-456d-88c5-f6abd68e75fb/manager/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.640472 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-zslc4_8eba803d-c206-4418-a0b6-8594af2670aa/kube-rbac-proxy/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.716249 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-7l8bg_940175df-0d37-4540-b0c7-a81bc3261ba6/kube-rbac-proxy/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.785649 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-zslc4_8eba803d-c206-4418-a0b6-8594af2670aa/manager/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.840860 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-7l8bg_940175df-0d37-4540-b0c7-a81bc3261ba6/manager/0.log" Sep 30 18:59:37 crc kubenswrapper[4688]: I0930 18:59:37.923607 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-t64tg_505c7e1c-5b6c-4736-8a1f-93ed5a92b47e/kube-rbac-proxy/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.037157 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-t64tg_505c7e1c-5b6c-4736-8a1f-93ed5a92b47e/manager/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.146068 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-7tgsx_398df167-3c2c-416f-b77c-baafa00ce747/kube-rbac-proxy/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.150769 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-7tgsx_398df167-3c2c-416f-b77c-baafa00ce747/manager/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.228595 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-886sw_f9796897-79b7-4273-8f85-f3c064a465af/kube-rbac-proxy/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.356599 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-886sw_f9796897-79b7-4273-8f85-f3c064a465af/manager/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.397162 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-h8tzm_16f6e3cb-02d6-4820-953b-a728440b812f/kube-rbac-proxy/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.454778 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-h8tzm_16f6e3cb-02d6-4820-953b-a728440b812f/manager/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.556499 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-q5q9z_ef369140-3b22-4cd7-926f-525ebde98f1a/kube-rbac-proxy/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.680293 4688 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-q5q9z_ef369140-3b22-4cd7-926f-525ebde98f1a/manager/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.787201 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-69zkg_f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d/kube-rbac-proxy/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.839093 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-69zkg_f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d/manager/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.940888 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-m24qn_145c6edc-16b0-47ec-846b-638e354aa1dd/kube-rbac-proxy/0.log" Sep 30 18:59:38 crc kubenswrapper[4688]: I0930 18:59:38.991061 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-m24qn_145c6edc-16b0-47ec-846b-638e354aa1dd/manager/0.log" Sep 30 18:59:39 crc kubenswrapper[4688]: I0930 18:59:39.058279 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-89bc4fb57-2x7jt_fc9e0382-183b-41ab-8890-9ad95c9ebba4/kube-rbac-proxy/0.log" Sep 30 18:59:39 crc kubenswrapper[4688]: I0930 18:59:39.241497 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-77647d9b49-6c75r_0e55afb9-bd88-450e-bb6a-5cba368b3f05/kube-rbac-proxy/0.log" Sep 30 18:59:39 crc kubenswrapper[4688]: I0930 18:59:39.408547 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-f55sl_0d838e78-fbd6-46d0-a298-efa17c407500/registry-server/0.log" Sep 30 18:59:39 crc kubenswrapper[4688]: I0930 18:59:39.409101 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-77647d9b49-6c75r_0e55afb9-bd88-450e-bb6a-5cba368b3f05/operator/0.log" Sep 30 18:59:39 crc kubenswrapper[4688]: I0930 18:59:39.653861 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-rd5sx_70e2e81b-be62-472a-8b70-d5aadace742e/kube-rbac-proxy/0.log" Sep 30 18:59:39 crc kubenswrapper[4688]: I0930 18:59:39.763072 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5ddfc99c65-m7slc_9e74b6a2-2c57-4514-9709-9db9b0a76d55/manager/0.log" Sep 30 18:59:39 crc kubenswrapper[4688]: I0930 18:59:39.771648 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5ddfc99c65-m7slc_9e74b6a2-2c57-4514-9709-9db9b0a76d55/kube-rbac-proxy/0.log" Sep 30 18:59:39 crc kubenswrapper[4688]: I0930 18:59:39.876220 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-rd5sx_70e2e81b-be62-472a-8b70-d5aadace742e/manager/0.log" Sep 30 18:59:39 crc kubenswrapper[4688]: I0930 18:59:39.995733 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-79d8469568-hxqmr_a01178d9-6b72-442e-b455-e5625b5e154e/operator/0.log" Sep 30 18:59:40 crc kubenswrapper[4688]: I0930 18:59:40.117180 4688 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-wkxsn_0768b406-8010-4804-8769-634ee21211ed/manager/0.log" Sep 30 18:59:40 crc kubenswrapper[4688]: I0930 18:59:40.117960 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-wkxsn_0768b406-8010-4804-8769-634ee21211ed/kube-rbac-proxy/0.log" Sep 30 18:59:40 crc kubenswrapper[4688]: I0930 18:59:40.237855 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-g6d6b_e2803e31-92d8-4071-8a82-84249fc01718/kube-rbac-proxy/0.log" Sep 30 18:59:40 crc kubenswrapper[4688]: I0930 18:59:40.255948 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-89bc4fb57-2x7jt_fc9e0382-183b-41ab-8890-9ad95c9ebba4/manager/0.log" Sep 30 18:59:40 crc kubenswrapper[4688]: I0930 18:59:40.398832 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-g6d6b_e2803e31-92d8-4071-8a82-84249fc01718/manager/0.log" Sep 30 18:59:40 crc kubenswrapper[4688]: I0930 18:59:40.431804 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-cdx5c_26fa1afc-ed5f-4541-aab0-e003a11f653c/kube-rbac-proxy/0.log" Sep 30 18:59:40 crc kubenswrapper[4688]: I0930 18:59:40.432074 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-cdx5c_26fa1afc-ed5f-4541-aab0-e003a11f653c/manager/0.log" Sep 30 18:59:40 crc kubenswrapper[4688]: I0930 18:59:40.532382 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-f2nzh_19e2aa18-5fe3-4451-8ca1-303b87a5e131/kube-rbac-proxy/0.log" Sep 30 18:59:40 crc kubenswrapper[4688]: I0930 18:59:40.556529 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-f2nzh_19e2aa18-5fe3-4451-8ca1-303b87a5e131/manager/0.log" Sep 30 18:59:56 crc kubenswrapper[4688]: I0930 18:59:56.134459 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cfd84_6f650a2d-90e1-465d-bc6b-1451aa0099ac/control-plane-machine-set-operator/0.log" Sep 30 18:59:56 crc kubenswrapper[4688]: I0930 18:59:56.342682 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tfh56_8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda/machine-api-operator/0.log" Sep 30 18:59:56 crc kubenswrapper[4688]: I0930 18:59:56.355853 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tfh56_8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda/kube-rbac-proxy/0.log" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.166463 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9"] Sep 30 19:00:00 crc kubenswrapper[4688]: E0930 19:00:00.167528 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63f22d9-52a3-45b1-a2b1-1da1d0d65255" containerName="container-00" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.167546 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63f22d9-52a3-45b1-a2b1-1da1d0d65255" 
containerName="container-00" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.167825 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63f22d9-52a3-45b1-a2b1-1da1d0d65255" containerName="container-00" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.168691 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.170649 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.172684 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.176909 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9"] Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.278745 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e24e05a-b999-4902-951d-5d6b6b4a87bd-config-volume\") pod \"collect-profiles-29320980-7fdj9\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.278831 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtx2\" (UniqueName: \"kubernetes.io/projected/2e24e05a-b999-4902-951d-5d6b6b4a87bd-kube-api-access-7vtx2\") pod \"collect-profiles-29320980-7fdj9\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.279673 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e24e05a-b999-4902-951d-5d6b6b4a87bd-secret-volume\") pod \"collect-profiles-29320980-7fdj9\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.381374 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtx2\" (UniqueName: \"kubernetes.io/projected/2e24e05a-b999-4902-951d-5d6b6b4a87bd-kube-api-access-7vtx2\") pod \"collect-profiles-29320980-7fdj9\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.381619 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e24e05a-b999-4902-951d-5d6b6b4a87bd-secret-volume\") pod \"collect-profiles-29320980-7fdj9\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.381848 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e24e05a-b999-4902-951d-5d6b6b4a87bd-config-volume\") pod \"collect-profiles-29320980-7fdj9\" (UID: 
\"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.382833 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e24e05a-b999-4902-951d-5d6b6b4a87bd-config-volume\") pod \"collect-profiles-29320980-7fdj9\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.387412 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e24e05a-b999-4902-951d-5d6b6b4a87bd-secret-volume\") pod \"collect-profiles-29320980-7fdj9\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.407014 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtx2\" (UniqueName: \"kubernetes.io/projected/2e24e05a-b999-4902-951d-5d6b6b4a87bd-kube-api-access-7vtx2\") pod \"collect-profiles-29320980-7fdj9\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.499664 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:00 crc kubenswrapper[4688]: I0930 19:00:00.923583 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9"] Sep 30 19:00:01 crc kubenswrapper[4688]: I0930 19:00:01.541457 4688 generic.go:334] "Generic (PLEG): container finished" podID="2e24e05a-b999-4902-951d-5d6b6b4a87bd" containerID="e5d1a8dde4cbdd88f19224a6b14d2881399b60ff75bbaabff7234c5f0dd09fc7" exitCode=0 Sep 30 19:00:01 crc kubenswrapper[4688]: I0930 19:00:01.541560 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" event={"ID":"2e24e05a-b999-4902-951d-5d6b6b4a87bd","Type":"ContainerDied","Data":"e5d1a8dde4cbdd88f19224a6b14d2881399b60ff75bbaabff7234c5f0dd09fc7"} Sep 30 19:00:01 crc kubenswrapper[4688]: I0930 19:00:01.541739 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" event={"ID":"2e24e05a-b999-4902-951d-5d6b6b4a87bd","Type":"ContainerStarted","Data":"dc5f41692be5f3a67bfd306546059c055ced5ca4a1bb4a9a214b6e0ecdd9f05d"} Sep 30 19:00:02 crc kubenswrapper[4688]: I0930 19:00:02.938380 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.032789 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e24e05a-b999-4902-951d-5d6b6b4a87bd-secret-volume\") pod \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.033159 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vtx2\" (UniqueName: \"kubernetes.io/projected/2e24e05a-b999-4902-951d-5d6b6b4a87bd-kube-api-access-7vtx2\") pod \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.033317 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e24e05a-b999-4902-951d-5d6b6b4a87bd-config-volume\") pod \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\" (UID: \"2e24e05a-b999-4902-951d-5d6b6b4a87bd\") " Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.033864 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e24e05a-b999-4902-951d-5d6b6b4a87bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e24e05a-b999-4902-951d-5d6b6b4a87bd" (UID: "2e24e05a-b999-4902-951d-5d6b6b4a87bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.038910 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e24e05a-b999-4902-951d-5d6b6b4a87bd-kube-api-access-7vtx2" (OuterVolumeSpecName: "kube-api-access-7vtx2") pod "2e24e05a-b999-4902-951d-5d6b6b4a87bd" (UID: "2e24e05a-b999-4902-951d-5d6b6b4a87bd"). InnerVolumeSpecName "kube-api-access-7vtx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.053733 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e24e05a-b999-4902-951d-5d6b6b4a87bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e24e05a-b999-4902-951d-5d6b6b4a87bd" (UID: "2e24e05a-b999-4902-951d-5d6b6b4a87bd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.136151 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e24e05a-b999-4902-951d-5d6b6b4a87bd-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.136194 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vtx2\" (UniqueName: \"kubernetes.io/projected/2e24e05a-b999-4902-951d-5d6b6b4a87bd-kube-api-access-7vtx2\") on node \"crc\" DevicePath \"\"" Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.136207 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e24e05a-b999-4902-951d-5d6b6b4a87bd-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.581351 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" event={"ID":"2e24e05a-b999-4902-951d-5d6b6b4a87bd","Type":"ContainerDied","Data":"dc5f41692be5f3a67bfd306546059c055ced5ca4a1bb4a9a214b6e0ecdd9f05d"} Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.581400 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5f41692be5f3a67bfd306546059c055ced5ca4a1bb4a9a214b6e0ecdd9f05d" Sep 30 19:00:03 crc kubenswrapper[4688]: I0930 19:00:03.581460 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320980-7fdj9" Sep 30 19:00:04 crc kubenswrapper[4688]: I0930 19:00:04.012978 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"] Sep 30 19:00:04 crc kubenswrapper[4688]: I0930 19:00:04.022005 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320935-9gsrw"] Sep 30 19:00:05 crc kubenswrapper[4688]: I0930 19:00:05.442410 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6" path="/var/lib/kubelet/pods/4c9e8a8f-4769-4e25-85b8-10fe70f2c6f6/volumes" Sep 30 19:00:08 crc kubenswrapper[4688]: I0930 19:00:08.031824 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-wgz84_c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15/cert-manager-controller/0.log" Sep 30 19:00:08 crc kubenswrapper[4688]: I0930 19:00:08.197171 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-jzzz7_f6514521-cd09-4a1e-b627-62e0703af4e6/cert-manager-cainjector/0.log" Sep 30 19:00:08 crc kubenswrapper[4688]: I0930 19:00:08.202769 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-tj7k4_cf0898fe-428e-49df-bafd-a45e3f38d476/cert-manager-webhook/0.log" Sep 30 19:00:18 crc kubenswrapper[4688]: I0930 19:00:18.836791 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-864bb6dfb5-wpjgb_5c329119-f853-41a4-89de-0f5358bc030d/nmstate-console-plugin/0.log" Sep 30 19:00:19 crc kubenswrapper[4688]: I0930 19:00:19.013391 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-44mqf_3510490b-a2cb-45a6-9bce-e671a256cde6/nmstate-handler/0.log" Sep 30 19:00:19 crc kubenswrapper[4688]: I0930 19:00:19.016258 
4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-lzq79_b3c49134-fd6f-43f7-b097-32d0e1384e5f/kube-rbac-proxy/0.log" Sep 30 19:00:19 crc kubenswrapper[4688]: I0930 19:00:19.048753 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-lzq79_b3c49134-fd6f-43f7-b097-32d0e1384e5f/nmstate-metrics/0.log" Sep 30 19:00:19 crc kubenswrapper[4688]: I0930 19:00:19.212988 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5d6f6cfd66-hmwrd_4e9693f8-bd8e-4ed6-8b39-0d7b926133c6/nmstate-operator/0.log" Sep 30 19:00:19 crc kubenswrapper[4688]: I0930 19:00:19.232807 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6d689559c5-j4lbr_5391ab76-cadc-44fb-b404-f1dd66d8f5f4/nmstate-webhook/0.log" Sep 30 19:00:31 crc kubenswrapper[4688]: I0930 19:00:31.668844 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-5l9vw_e73cf014-002c-4642-815a-8a77f2ec12ee/kube-rbac-proxy/0.log" Sep 30 19:00:31 crc kubenswrapper[4688]: I0930 19:00:31.795739 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-5l9vw_e73cf014-002c-4642-815a-8a77f2ec12ee/controller/0.log" Sep 30 19:00:31 crc kubenswrapper[4688]: I0930 19:00:31.869147 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-frr-files/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.012557 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-frr-files/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.033336 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-reloader/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.033498 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-metrics/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.044900 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-reloader/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.170217 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-frr-files/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.197687 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-reloader/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.239632 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-metrics/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.252256 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-metrics/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.414029 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-reloader/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.418844 4688 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/controller/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.456123 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-metrics/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.477859 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-frr-files/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.622040 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/kube-rbac-proxy/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.634318 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/frr-metrics/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.686950 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/kube-rbac-proxy-frr/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.862383 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/reloader/0.log" Sep 30 19:00:32 crc kubenswrapper[4688]: I0930 19:00:32.905930 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-5478bdb765-v9pk8_ce858a63-686b-4a1c-98a6-de91bb6992e1/frr-k8s-webhook-server/0.log" Sep 30 19:00:33 crc kubenswrapper[4688]: I0930 19:00:33.121521 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d94b69f58-xr5pv_dfdf8b45-1929-49f8-95ef-94751966a5c3/manager/0.log" Sep 30 19:00:33 crc kubenswrapper[4688]: I0930 19:00:33.283494 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-85b8f7c4f8-d477v_0f7203cd-f438-40a7-b09c-210c6afa21ff/webhook-server/0.log" Sep 30 19:00:33 crc kubenswrapper[4688]: I0930 19:00:33.358040 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pfrp4_9936dd27-23ea-412c-98b4-dab61a77b58b/kube-rbac-proxy/0.log" Sep 30 19:00:33 crc kubenswrapper[4688]: I0930 19:00:33.951378 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pfrp4_9936dd27-23ea-412c-98b4-dab61a77b58b/speaker/0.log" Sep 30 19:00:34 crc kubenswrapper[4688]: I0930 19:00:34.263147 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/frr/0.log" Sep 30 19:00:44 crc kubenswrapper[4688]: I0930 19:00:44.962243 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/util/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.144338 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/util/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.181221 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/pull/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.196153 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/pull/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.384683 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/util/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.384872 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/pull/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.409442 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/extract/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.533233 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-utilities/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.706934 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-utilities/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.711557 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-content/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.712745 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-content/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.933084 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-utilities/0.log" Sep 30 19:00:45 crc kubenswrapper[4688]: I0930 19:00:45.938379 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-content/0.log" Sep 30 19:00:46 crc kubenswrapper[4688]: I0930 19:00:46.197147 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-utilities/0.log" Sep 30 19:00:46 crc kubenswrapper[4688]: I0930 19:00:46.363126 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-utilities/0.log" Sep 30 19:00:46 crc kubenswrapper[4688]: I0930 19:00:46.445857 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-content/0.log" Sep 30 19:00:46 crc kubenswrapper[4688]: I0930 19:00:46.446088 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-content/0.log" Sep 30 19:00:46 crc kubenswrapper[4688]: I0930 19:00:46.663556 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-utilities/0.log" Sep 30 19:00:46 crc kubenswrapper[4688]: I0930 19:00:46.678583 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-content/0.log" Sep 30 19:00:46 crc kubenswrapper[4688]: I0930 19:00:46.781089 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/registry-server/0.log" Sep 30 19:00:46 crc kubenswrapper[4688]: I0930 19:00:46.929683 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/util/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.131655 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/util/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.132714 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/pull/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.144529 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/pull/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.359114 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/extract/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.370608 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/util/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.371412 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/pull/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.545373 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/registry-server/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.563290 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ddkjs_f4669830-b238-42f8-8824-9aced2d9ca0a/marketplace-operator/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.731672 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-utilities/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.891140 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-utilities/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.898663 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-content/0.log" Sep 30 19:00:47 crc kubenswrapper[4688]: I0930 19:00:47.916519 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-content/0.log" Sep 30 19:00:48 crc kubenswrapper[4688]: I0930 19:00:48.028904 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-utilities/0.log" Sep 30 19:00:48 crc kubenswrapper[4688]: I0930 19:00:48.052300 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-content/0.log" Sep 30 19:00:48 crc kubenswrapper[4688]: I0930 19:00:48.183888 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-utilities/0.log" Sep 30 19:00:48 crc kubenswrapper[4688]: I0930 19:00:48.261509 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/registry-server/0.log" Sep 30 19:00:48 crc kubenswrapper[4688]: I0930 19:00:48.410126 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-content/0.log" Sep 30 19:00:48 crc kubenswrapper[4688]: I0930 19:00:48.410315 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-content/0.log" Sep 30 19:00:48 crc kubenswrapper[4688]: I0930 19:00:48.415186 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-utilities/0.log" Sep 30 19:00:48 crc kubenswrapper[4688]: I0930 19:00:48.612984 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-utilities/0.log" Sep 30 19:00:48 crc kubenswrapper[4688]: I0930 19:00:48.631453 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-content/0.log" Sep 30 19:00:49 crc kubenswrapper[4688]: I0930 19:00:49.532535 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/registry-server/0.log" Sep 30 19:00:50 crc kubenswrapper[4688]: I0930 19:00:50.807767 4688 scope.go:117] "RemoveContainer" containerID="fc478b94ddb43327625b96dba539d35efdb5e207e529118affa64820d72f2d4e" Sep 30 19:00:52 crc kubenswrapper[4688]: I0930 19:00:52.545563 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 19:00:52 crc kubenswrapper[4688]: I0930 19:00:52.546004 4688 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.158884 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29320981-tpstq"] Sep 30 19:01:00 crc kubenswrapper[4688]: E0930 19:01:00.160253 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e24e05a-b999-4902-951d-5d6b6b4a87bd" containerName="collect-profiles" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.160284 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e24e05a-b999-4902-951d-5d6b6b4a87bd" containerName="collect-profiles" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.160751 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e24e05a-b999-4902-951d-5d6b6b4a87bd" containerName="collect-profiles" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.161921 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.172785 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29320981-tpstq"] Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.312912 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-combined-ca-bundle\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.312966 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddh9d\" (UniqueName: \"kubernetes.io/projected/ce838482-0a4d-4150-8704-7f041de6c823-kube-api-access-ddh9d\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.312994 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-config-data\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.313744 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-fernet-keys\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.416134 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-combined-ca-bundle\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.416197 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ddh9d\" (UniqueName: \"kubernetes.io/projected/ce838482-0a4d-4150-8704-7f041de6c823-kube-api-access-ddh9d\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.416249 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-config-data\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.416333 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-fernet-keys\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.422251 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-fernet-keys\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.423026 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-config-data\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.439497 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-combined-ca-bundle\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.439681 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddh9d\" (UniqueName: \"kubernetes.io/projected/ce838482-0a4d-4150-8704-7f041de6c823-kube-api-access-ddh9d\") pod \"keystone-cron-29320981-tpstq\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:00 crc kubenswrapper[4688]: I0930 19:01:00.493095 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:01 crc kubenswrapper[4688]: I0930 19:01:01.064478 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29320981-tpstq"] Sep 30 19:01:01 crc kubenswrapper[4688]: I0930 19:01:01.108373 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29320981-tpstq" event={"ID":"ce838482-0a4d-4150-8704-7f041de6c823","Type":"ContainerStarted","Data":"c0de608988cf5a25dab67fd7c5009a92257c6f9b29d7cba771ad0855115c00e3"} Sep 30 19:01:02 crc kubenswrapper[4688]: I0930 19:01:02.165082 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29320981-tpstq" event={"ID":"ce838482-0a4d-4150-8704-7f041de6c823","Type":"ContainerStarted","Data":"badc111476960c5305e4f45039ab5bf1cf67bcd36f5ccbe4ce13e4178d6ef829"} Sep 30 19:01:02 crc kubenswrapper[4688]: I0930 19:01:02.209117 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29320981-tpstq" podStartSLOduration=2.20909484 podStartE2EDuration="2.20909484s" podCreationTimestamp="2025-09-30 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 19:01:02.202762016 +0000 UTC m=+6319.498199584" watchObservedRunningTime="2025-09-30 19:01:02.20909484 +0000 UTC m=+6319.504532398" Sep 30 19:01:03 crc kubenswrapper[4688]: E0930 19:01:03.915634 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce838482_0a4d_4150_8704_7f041de6c823.slice/crio-conmon-badc111476960c5305e4f45039ab5bf1cf67bcd36f5ccbe4ce13e4178d6ef829.scope\": RecentStats: unable to find data in memory cache]" Sep 30 19:01:04 crc kubenswrapper[4688]: I0930 19:01:04.184170 4688 generic.go:334] "Generic (PLEG): container finished" podID="ce838482-0a4d-4150-8704-7f041de6c823" containerID="badc111476960c5305e4f45039ab5bf1cf67bcd36f5ccbe4ce13e4178d6ef829" exitCode=0 Sep 30 19:01:04 crc kubenswrapper[4688]: I0930 19:01:04.184245 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29320981-tpstq" event={"ID":"ce838482-0a4d-4150-8704-7f041de6c823","Type":"ContainerDied","Data":"badc111476960c5305e4f45039ab5bf1cf67bcd36f5ccbe4ce13e4178d6ef829"} Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.614623 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.721375 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-config-data\") pod \"ce838482-0a4d-4150-8704-7f041de6c823\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.721789 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddh9d\" (UniqueName: \"kubernetes.io/projected/ce838482-0a4d-4150-8704-7f041de6c823-kube-api-access-ddh9d\") pod \"ce838482-0a4d-4150-8704-7f041de6c823\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.721835 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-fernet-keys\") pod \"ce838482-0a4d-4150-8704-7f041de6c823\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.721873 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-combined-ca-bundle\") pod \"ce838482-0a4d-4150-8704-7f041de6c823\" (UID: \"ce838482-0a4d-4150-8704-7f041de6c823\") " Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.728782 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce838482-0a4d-4150-8704-7f041de6c823-kube-api-access-ddh9d" (OuterVolumeSpecName: "kube-api-access-ddh9d") pod "ce838482-0a4d-4150-8704-7f041de6c823" (UID: "ce838482-0a4d-4150-8704-7f041de6c823"). InnerVolumeSpecName "kube-api-access-ddh9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.736857 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ce838482-0a4d-4150-8704-7f041de6c823" (UID: "ce838482-0a4d-4150-8704-7f041de6c823"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.761622 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce838482-0a4d-4150-8704-7f041de6c823" (UID: "ce838482-0a4d-4150-8704-7f041de6c823"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.782279 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-config-data" (OuterVolumeSpecName: "config-data") pod "ce838482-0a4d-4150-8704-7f041de6c823" (UID: "ce838482-0a4d-4150-8704-7f041de6c823"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.824236 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddh9d\" (UniqueName: \"kubernetes.io/projected/ce838482-0a4d-4150-8704-7f041de6c823-kube-api-access-ddh9d\") on node \"crc\" DevicePath \"\"" Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.824275 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.824288 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 19:01:05 crc kubenswrapper[4688]: I0930 19:01:05.824299 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce838482-0a4d-4150-8704-7f041de6c823-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 19:01:06 crc kubenswrapper[4688]: I0930 19:01:06.205264 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29320981-tpstq" event={"ID":"ce838482-0a4d-4150-8704-7f041de6c823","Type":"ContainerDied","Data":"c0de608988cf5a25dab67fd7c5009a92257c6f9b29d7cba771ad0855115c00e3"} Sep 30 19:01:06 crc kubenswrapper[4688]: I0930 19:01:06.205307 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0de608988cf5a25dab67fd7c5009a92257c6f9b29d7cba771ad0855115c00e3" Sep 30 19:01:06 crc kubenswrapper[4688]: I0930 19:01:06.205367 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29320981-tpstq" Sep 30 19:01:20 crc kubenswrapper[4688]: E0930 19:01:20.184488 4688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.192:35888->38.129.56.192:40727: write tcp 38.129.56.192:35888->38.129.56.192:40727: write: broken pipe Sep 30 19:01:22 crc kubenswrapper[4688]: I0930 19:01:22.545871 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 19:01:22 crc kubenswrapper[4688]: I0930 19:01:22.546164 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 19:01:52 crc kubenswrapper[4688]: I0930 19:01:52.546229 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 19:01:52 crc kubenswrapper[4688]: I0930 19:01:52.547042 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Sep 30 19:01:52 crc kubenswrapper[4688]: I0930 19:01:52.547109 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 19:01:52 crc kubenswrapper[4688]: I0930 19:01:52.548219 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 19:01:52 crc kubenswrapper[4688]: I0930 19:01:52.548322 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" gracePeriod=600 Sep 30 19:01:52 crc kubenswrapper[4688]: E0930 19:01:52.679690 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:01:53 crc kubenswrapper[4688]: I0930 19:01:53.687397 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" exitCode=0 Sep 30 19:01:53 crc kubenswrapper[4688]: I0930 19:01:53.687452 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35"} Sep 30 19:01:53 crc kubenswrapper[4688]: I0930 19:01:53.687768 4688 scope.go:117] "RemoveContainer" containerID="61972f67c084cf8890fe543a5bc7f115c88b899433a5ca21104e9c9ba1c273a5" Sep 30 19:01:53 crc kubenswrapper[4688]: I0930 19:01:53.688524 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:01:53 crc kubenswrapper[4688]: E0930 19:01:53.688891 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:02:06 crc kubenswrapper[4688]: I0930 19:02:06.430488 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:02:06 crc kubenswrapper[4688]: E0930 19:02:06.432989 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:02:17 crc kubenswrapper[4688]: I0930 19:02:17.444276 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:02:17 crc kubenswrapper[4688]: E0930 19:02:17.445525 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.254810 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mtp4q"] Sep 30 19:02:23 crc kubenswrapper[4688]: E0930 19:02:23.255939 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce838482-0a4d-4150-8704-7f041de6c823" containerName="keystone-cron" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.255954 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce838482-0a4d-4150-8704-7f041de6c823" containerName="keystone-cron" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.256371 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce838482-0a4d-4150-8704-7f041de6c823" containerName="keystone-cron" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.257735 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.273927 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mtp4q"] Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.454369 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-utilities\") pod \"community-operators-mtp4q\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.454660 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-catalog-content\") pod \"community-operators-mtp4q\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.454962 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtzq4\" (UniqueName: \"kubernetes.io/projected/adc9f44a-1d73-4cea-9009-d2c3a3a37072-kube-api-access-xtzq4\") pod \"community-operators-mtp4q\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.557183 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtzq4\" (UniqueName: \"kubernetes.io/projected/adc9f44a-1d73-4cea-9009-d2c3a3a37072-kube-api-access-xtzq4\") pod 
\"community-operators-mtp4q\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.558768 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-utilities\") pod \"community-operators-mtp4q\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.558905 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-catalog-content\") pod \"community-operators-mtp4q\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.559136 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-utilities\") pod \"community-operators-mtp4q\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.559670 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-catalog-content\") pod \"community-operators-mtp4q\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.593668 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtzq4\" (UniqueName: \"kubernetes.io/projected/adc9f44a-1d73-4cea-9009-d2c3a3a37072-kube-api-access-xtzq4\") pod \"community-operators-mtp4q\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:23 crc kubenswrapper[4688]: I0930 19:02:23.624775 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:24 crc kubenswrapper[4688]: I0930 19:02:24.115030 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mtp4q"] Sep 30 19:02:25 crc kubenswrapper[4688]: I0930 19:02:25.049620 4688 generic.go:334] "Generic (PLEG): container finished" podID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerID="25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee" exitCode=0 Sep 30 19:02:25 crc kubenswrapper[4688]: I0930 19:02:25.049695 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtp4q" event={"ID":"adc9f44a-1d73-4cea-9009-d2c3a3a37072","Type":"ContainerDied","Data":"25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee"} Sep 30 19:02:25 crc kubenswrapper[4688]: I0930 19:02:25.049738 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtp4q" event={"ID":"adc9f44a-1d73-4cea-9009-d2c3a3a37072","Type":"ContainerStarted","Data":"3f4de954adf147316ba52c0ad0acfae3859e8c57877ee87273c2b4571eeb0fe5"} Sep 30 19:02:25 crc kubenswrapper[4688]: I0930 19:02:25.052526 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 19:02:27 crc kubenswrapper[4688]: I0930 19:02:27.070251 4688 generic.go:334] "Generic (PLEG): container finished" podID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerID="19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39" exitCode=0 Sep 30 19:02:27 crc kubenswrapper[4688]: I0930 19:02:27.070365 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtp4q" event={"ID":"adc9f44a-1d73-4cea-9009-d2c3a3a37072","Type":"ContainerDied","Data":"19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39"} Sep 30 19:02:28 crc kubenswrapper[4688]: I0930 19:02:28.082457 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtp4q" event={"ID":"adc9f44a-1d73-4cea-9009-d2c3a3a37072","Type":"ContainerStarted","Data":"a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c"} Sep 30 19:02:28 crc kubenswrapper[4688]: I0930 19:02:28.137463 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mtp4q" podStartSLOduration=2.675603791 podStartE2EDuration="5.137440857s" podCreationTimestamp="2025-09-30 19:02:23 +0000 UTC" firstStartedPulling="2025-09-30 19:02:25.052139462 +0000 UTC m=+6402.347577060" lastFinishedPulling="2025-09-30 19:02:27.513976558 +0000 UTC m=+6404.809414126" observedRunningTime="2025-09-30 19:02:28.112659196 +0000 UTC m=+6405.408096774" watchObservedRunningTime="2025-09-30 19:02:28.137440857 +0000 UTC m=+6405.432878425" Sep 30 19:02:30 crc kubenswrapper[4688]: I0930 19:02:30.429500 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:02:30 crc kubenswrapper[4688]: E0930 19:02:30.430070 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:02:33 crc 
kubenswrapper[4688]: I0930 19:02:33.625473 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:33 crc kubenswrapper[4688]: I0930 19:02:33.626180 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:33 crc kubenswrapper[4688]: I0930 19:02:33.673545 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:34 crc kubenswrapper[4688]: I0930 19:02:34.251092 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:34 crc kubenswrapper[4688]: I0930 19:02:34.316225 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mtp4q"] Sep 30 19:02:36 crc kubenswrapper[4688]: I0930 19:02:36.185838 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mtp4q" podUID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerName="registry-server" containerID="cri-o://a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c" gracePeriod=2 Sep 30 19:02:36 crc kubenswrapper[4688]: I0930 19:02:36.648029 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:36 crc kubenswrapper[4688]: I0930 19:02:36.831552 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-catalog-content\") pod \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " Sep 30 19:02:36 crc kubenswrapper[4688]: I0930 19:02:36.832057 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-utilities\") pod \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " Sep 30 19:02:36 crc kubenswrapper[4688]: I0930 19:02:36.832381 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtzq4\" (UniqueName: \"kubernetes.io/projected/adc9f44a-1d73-4cea-9009-d2c3a3a37072-kube-api-access-xtzq4\") pod \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\" (UID: \"adc9f44a-1d73-4cea-9009-d2c3a3a37072\") " Sep 30 19:02:36 crc kubenswrapper[4688]: I0930 19:02:36.832724 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-utilities" (OuterVolumeSpecName: "utilities") pod "adc9f44a-1d73-4cea-9009-d2c3a3a37072" (UID: "adc9f44a-1d73-4cea-9009-d2c3a3a37072"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 19:02:36 crc kubenswrapper[4688]: I0930 19:02:36.833468 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 19:02:36 crc kubenswrapper[4688]: I0930 19:02:36.848982 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc9f44a-1d73-4cea-9009-d2c3a3a37072-kube-api-access-xtzq4" (OuterVolumeSpecName: "kube-api-access-xtzq4") pod "adc9f44a-1d73-4cea-9009-d2c3a3a37072" (UID: "adc9f44a-1d73-4cea-9009-d2c3a3a37072"). InnerVolumeSpecName "kube-api-access-xtzq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:02:36 crc kubenswrapper[4688]: I0930 19:02:36.935560 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtzq4\" (UniqueName: \"kubernetes.io/projected/adc9f44a-1d73-4cea-9009-d2c3a3a37072-kube-api-access-xtzq4\") on node \"crc\" DevicePath \"\"" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.198095 4688 generic.go:334] "Generic (PLEG): container finished" podID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerID="a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c" exitCode=0 Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.198136 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtp4q" event={"ID":"adc9f44a-1d73-4cea-9009-d2c3a3a37072","Type":"ContainerDied","Data":"a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c"} Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.198171 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mtp4q" event={"ID":"adc9f44a-1d73-4cea-9009-d2c3a3a37072","Type":"ContainerDied","Data":"3f4de954adf147316ba52c0ad0acfae3859e8c57877ee87273c2b4571eeb0fe5"} Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.198171 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mtp4q" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.198212 4688 scope.go:117] "RemoveContainer" containerID="a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.219467 4688 scope.go:117] "RemoveContainer" containerID="19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.249968 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adc9f44a-1d73-4cea-9009-d2c3a3a37072" (UID: "adc9f44a-1d73-4cea-9009-d2c3a3a37072"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.264354 4688 scope.go:117] "RemoveContainer" containerID="25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.282820 4688 scope.go:117] "RemoveContainer" containerID="a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c" Sep 30 19:02:37 crc kubenswrapper[4688]: E0930 19:02:37.283356 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c\": container with ID starting with a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c not found: ID does not exist" containerID="a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.283442 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c"} err="failed to get container status \"a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c\": rpc error: code = NotFound desc = could not find container \"a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c\": container with ID starting with a5bbaf67277959f94f831cd09d059d4cd75adc44eb054d8b6a8ba773239a691c not found: ID does not exist" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.283521 4688 scope.go:117] "RemoveContainer" containerID="19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39" Sep 30 19:02:37 crc kubenswrapper[4688]: E0930 19:02:37.284219 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39\": container with ID starting with 19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39 not found: ID does not exist" containerID="19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.284337 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39"} err="failed to get container status \"19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39\": rpc error: code = NotFound desc = could not find container \"19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39\": container with ID starting with 19b7334798c439b8a550d0f5c1cefbe1bff3a12a9942c70a5ebe77f52022fd39 not found: ID does not exist" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.284451 4688 scope.go:117] "RemoveContainer" containerID="25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee" Sep 30 19:02:37 crc kubenswrapper[4688]: E0930 19:02:37.284890 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee\": container with ID starting with 25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee not found: ID does not exist" containerID="25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.284914 4688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee"} err="failed to get container status \"25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee\": rpc error: code = NotFound desc = could not find container \"25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee\": container with ID starting with 25b9e0e95b171a41b5bdb9ad3b4a4f2a1c24134e815df8e617b023f0893478ee not found: ID does not exist" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.349277 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adc9f44a-1d73-4cea-9009-d2c3a3a37072-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.524967 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mtp4q"] Sep 30 19:02:37 crc kubenswrapper[4688]: I0930 19:02:37.530677 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mtp4q"] Sep 30 19:02:39 crc kubenswrapper[4688]: I0930 19:02:39.441626 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" path="/var/lib/kubelet/pods/adc9f44a-1d73-4cea-9009-d2c3a3a37072/volumes" Sep 30 19:02:44 crc kubenswrapper[4688]: I0930 19:02:44.430131 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:02:44 crc kubenswrapper[4688]: E0930 19:02:44.430853 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:02:57 crc kubenswrapper[4688]: I0930 19:02:57.429643 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:02:57 crc kubenswrapper[4688]: E0930 19:02:57.430618 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:03:05 crc kubenswrapper[4688]: I0930 19:03:05.496744 4688 generic.go:334] "Generic (PLEG): container finished" podID="657ce608-1270-43be-98bd-4f1705b45916" containerID="6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1" exitCode=0 Sep 30 19:03:05 crc kubenswrapper[4688]: I0930 19:03:05.496848 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd5qb/must-gather-46dzk" event={"ID":"657ce608-1270-43be-98bd-4f1705b45916","Type":"ContainerDied","Data":"6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1"} Sep 30 19:03:05 crc kubenswrapper[4688]: I0930 19:03:05.498144 4688 scope.go:117] "RemoveContainer" containerID="6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1" Sep 30 19:03:06 crc kubenswrapper[4688]: I0930 19:03:06.152753 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-bd5qb_must-gather-46dzk_657ce608-1270-43be-98bd-4f1705b45916/gather/0.log" Sep 30 19:03:12 crc kubenswrapper[4688]: I0930 19:03:12.430516 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:03:12 crc kubenswrapper[4688]: E0930 19:03:12.431734 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:03:14 crc kubenswrapper[4688]: I0930 19:03:14.627018 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd5qb/must-gather-46dzk"] Sep 30 19:03:14 crc kubenswrapper[4688]: I0930 19:03:14.627521 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bd5qb/must-gather-46dzk" podUID="657ce608-1270-43be-98bd-4f1705b45916" containerName="copy" containerID="cri-o://ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111" gracePeriod=2 Sep 30 19:03:14 crc kubenswrapper[4688]: I0930 19:03:14.634504 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd5qb/must-gather-46dzk"] Sep 30 19:03:14 crc kubenswrapper[4688]: I0930 19:03:14.999623 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd5qb_must-gather-46dzk_657ce608-1270-43be-98bd-4f1705b45916/copy/0.log" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.000299 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.193751 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/657ce608-1270-43be-98bd-4f1705b45916-must-gather-output\") pod \"657ce608-1270-43be-98bd-4f1705b45916\" (UID: \"657ce608-1270-43be-98bd-4f1705b45916\") " Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.193962 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztkjq\" (UniqueName: \"kubernetes.io/projected/657ce608-1270-43be-98bd-4f1705b45916-kube-api-access-ztkjq\") pod \"657ce608-1270-43be-98bd-4f1705b45916\" (UID: \"657ce608-1270-43be-98bd-4f1705b45916\") " Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.201776 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/657ce608-1270-43be-98bd-4f1705b45916-kube-api-access-ztkjq" (OuterVolumeSpecName: "kube-api-access-ztkjq") pod "657ce608-1270-43be-98bd-4f1705b45916" (UID: "657ce608-1270-43be-98bd-4f1705b45916"). InnerVolumeSpecName "kube-api-access-ztkjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.296213 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztkjq\" (UniqueName: \"kubernetes.io/projected/657ce608-1270-43be-98bd-4f1705b45916-kube-api-access-ztkjq\") on node \"crc\" DevicePath \"\"" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.385262 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/657ce608-1270-43be-98bd-4f1705b45916-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "657ce608-1270-43be-98bd-4f1705b45916" (UID: "657ce608-1270-43be-98bd-4f1705b45916"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.397638 4688 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/657ce608-1270-43be-98bd-4f1705b45916-must-gather-output\") on node \"crc\" DevicePath \"\"" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.439240 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="657ce608-1270-43be-98bd-4f1705b45916" path="/var/lib/kubelet/pods/657ce608-1270-43be-98bd-4f1705b45916/volumes" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.610434 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd5qb_must-gather-46dzk_657ce608-1270-43be-98bd-4f1705b45916/copy/0.log" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.611058 4688 generic.go:334] "Generic (PLEG): container finished" podID="657ce608-1270-43be-98bd-4f1705b45916" containerID="ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111" exitCode=143 Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.611130 4688 scope.go:117] "RemoveContainer" containerID="ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.611482 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd5qb/must-gather-46dzk" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.649711 4688 scope.go:117] "RemoveContainer" containerID="6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.720818 4688 scope.go:117] "RemoveContainer" containerID="ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111" Sep 30 19:03:15 crc kubenswrapper[4688]: E0930 19:03:15.721301 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111\": container with ID starting with ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111 not found: ID does not exist" containerID="ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.721330 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111"} err="failed to get container status \"ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111\": rpc error: code = NotFound desc = could not find container \"ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111\": container with ID starting with ae16b9109bfbde516a95d476e6757168094ac64adb38bfb793ccf29cd1e85111 not found: ID does not exist" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.721349 4688 scope.go:117] "RemoveContainer" containerID="6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1" Sep 30 19:03:15 crc kubenswrapper[4688]: E0930 19:03:15.721561 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1\": container with ID starting with 6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1 not found: ID does not exist" containerID="6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1" Sep 30 19:03:15 crc kubenswrapper[4688]: I0930 19:03:15.721616 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1"} err="failed to get container status \"6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1\": rpc error: code = NotFound desc = could not find container \"6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1\": container with ID starting with 6c65d752ff3d9b6a95b0d3b33cdd84fc99eacb2526c6dde34e9b14d91b4c9ad1 not found: ID does not exist" Sep 30 19:03:25 crc kubenswrapper[4688]: I0930 19:03:25.429554 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:03:25 crc kubenswrapper[4688]: E0930 19:03:25.430345 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:03:40 crc kubenswrapper[4688]: I0930 19:03:40.428999 4688 scope.go:117] "RemoveContainer" 
containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:03:40 crc kubenswrapper[4688]: E0930 19:03:40.429699 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.280276 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tf84h/must-gather-mmssn"] Sep 30 19:03:49 crc kubenswrapper[4688]: E0930 19:03:49.282258 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerName="registry-server" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.282371 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerName="registry-server" Sep 30 19:03:49 crc kubenswrapper[4688]: E0930 19:03:49.282764 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerName="extract-utilities" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.282844 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerName="extract-utilities" Sep 30 19:03:49 crc kubenswrapper[4688]: E0930 19:03:49.282910 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657ce608-1270-43be-98bd-4f1705b45916" containerName="gather" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.282964 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="657ce608-1270-43be-98bd-4f1705b45916" containerName="gather" Sep 30 19:03:49 crc kubenswrapper[4688]: E0930 19:03:49.283027 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657ce608-1270-43be-98bd-4f1705b45916" containerName="copy" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.283092 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="657ce608-1270-43be-98bd-4f1705b45916" containerName="copy" Sep 30 19:03:49 crc kubenswrapper[4688]: E0930 19:03:49.283205 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerName="extract-content" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.283284 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerName="extract-content" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.283833 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="657ce608-1270-43be-98bd-4f1705b45916" containerName="gather" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.285546 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc9f44a-1d73-4cea-9009-d2c3a3a37072" containerName="registry-server" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.285705 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="657ce608-1270-43be-98bd-4f1705b45916" containerName="copy" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.287229 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.292944 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tf84h"/"default-dockercfg-c694m" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.293271 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tf84h"/"openshift-service-ca.crt" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.295930 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tf84h"/"kube-root-ca.crt" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.303061 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tf84h/must-gather-mmssn"] Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.435873 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-must-gather-output\") pod \"must-gather-mmssn\" (UID: \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\") " pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.435973 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8z65\" (UniqueName: \"kubernetes.io/projected/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-kube-api-access-q8z65\") pod \"must-gather-mmssn\" (UID: \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\") " pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.537431 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-must-gather-output\") pod \"must-gather-mmssn\" (UID: \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\") " pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.537518 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8z65\" (UniqueName: \"kubernetes.io/projected/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-kube-api-access-q8z65\") pod \"must-gather-mmssn\" (UID: \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\") " pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.537933 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-must-gather-output\") pod \"must-gather-mmssn\" (UID: \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\") " pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.561794 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8z65\" (UniqueName: \"kubernetes.io/projected/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-kube-api-access-q8z65\") pod \"must-gather-mmssn\" (UID: \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\") " pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:03:49 crc kubenswrapper[4688]: I0930 19:03:49.608156 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:03:50 crc kubenswrapper[4688]: I0930 19:03:50.103315 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tf84h/must-gather-mmssn"] Sep 30 19:03:50 crc kubenswrapper[4688]: I0930 19:03:50.950197 4688 scope.go:117] "RemoveContainer" containerID="f5d485dbac0a7fa45ff0cdcaf21361a4fde186b2c180308c370fe77ef3440355" Sep 30 19:03:51 crc kubenswrapper[4688]: I0930 19:03:51.002296 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/must-gather-mmssn" event={"ID":"e7daf765-a0e3-45c8-9be0-81ca0b58bdee","Type":"ContainerStarted","Data":"a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9"} Sep 30 19:03:51 crc kubenswrapper[4688]: I0930 19:03:51.002350 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/must-gather-mmssn" event={"ID":"e7daf765-a0e3-45c8-9be0-81ca0b58bdee","Type":"ContainerStarted","Data":"833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc"} Sep 30 19:03:51 crc kubenswrapper[4688]: I0930 19:03:51.002365 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/must-gather-mmssn" event={"ID":"e7daf765-a0e3-45c8-9be0-81ca0b58bdee","Type":"ContainerStarted","Data":"966a11ec25a71cb6f5e4e64b357dd2c32471655749725a79b2a385d1a25e5065"} Sep 30 19:03:52 crc kubenswrapper[4688]: E0930 19:03:52.997338 4688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.192:38016->38.129.56.192:40727: write tcp 38.129.56.192:38016->38.129.56.192:40727: write: connection reset by peer Sep 30 19:03:53 crc kubenswrapper[4688]: E0930 19:03:53.325864 4688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.192:38112->38.129.56.192:40727: write tcp 38.129.56.192:38112->38.129.56.192:40727: write: broken pipe Sep 30 19:03:53 crc kubenswrapper[4688]: I0930 19:03:53.435413 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:03:53 crc kubenswrapper[4688]: E0930 19:03:53.435951 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:03:53 crc kubenswrapper[4688]: I0930 19:03:53.905881 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tf84h/must-gather-mmssn" podStartSLOduration=4.905862958 podStartE2EDuration="4.905862958s" podCreationTimestamp="2025-09-30 19:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 19:03:51.02096139 +0000 UTC m=+6488.316398948" watchObservedRunningTime="2025-09-30 19:03:53.905862958 +0000 UTC m=+6491.201300516" Sep 30 19:03:53 crc kubenswrapper[4688]: I0930 19:03:53.908753 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tf84h/crc-debug-7frtz"] Sep 30 19:03:53 crc kubenswrapper[4688]: I0930 19:03:53.909811 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:03:54 crc kubenswrapper[4688]: I0930 19:03:54.024163 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r97gj\" (UniqueName: \"kubernetes.io/projected/bfff748a-db35-4098-8273-56899d94cdd8-kube-api-access-r97gj\") pod \"crc-debug-7frtz\" (UID: \"bfff748a-db35-4098-8273-56899d94cdd8\") " pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:03:54 crc kubenswrapper[4688]: I0930 19:03:54.024238 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfff748a-db35-4098-8273-56899d94cdd8-host\") pod \"crc-debug-7frtz\" (UID: \"bfff748a-db35-4098-8273-56899d94cdd8\") " pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:03:54 crc kubenswrapper[4688]: I0930 19:03:54.126381 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r97gj\" (UniqueName: \"kubernetes.io/projected/bfff748a-db35-4098-8273-56899d94cdd8-kube-api-access-r97gj\") pod \"crc-debug-7frtz\" (UID: \"bfff748a-db35-4098-8273-56899d94cdd8\") " pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:03:54 crc kubenswrapper[4688]: I0930 19:03:54.126473 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfff748a-db35-4098-8273-56899d94cdd8-host\") pod \"crc-debug-7frtz\" (UID: \"bfff748a-db35-4098-8273-56899d94cdd8\") " pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:03:54 crc kubenswrapper[4688]: I0930 19:03:54.126615 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfff748a-db35-4098-8273-56899d94cdd8-host\") pod \"crc-debug-7frtz\" (UID: \"bfff748a-db35-4098-8273-56899d94cdd8\") " pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:03:54 crc kubenswrapper[4688]: I0930 19:03:54.148198 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r97gj\" (UniqueName: \"kubernetes.io/projected/bfff748a-db35-4098-8273-56899d94cdd8-kube-api-access-r97gj\") pod \"crc-debug-7frtz\" (UID: \"bfff748a-db35-4098-8273-56899d94cdd8\") " pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:03:54 crc kubenswrapper[4688]: I0930 19:03:54.229999 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:03:54 crc kubenswrapper[4688]: W0930 19:03:54.258717 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfff748a_db35_4098_8273_56899d94cdd8.slice/crio-2bfa0679b85d040a96265acf1ec5f05ff7e9d833a660ff2fd48f751b8e6ef26e WatchSource:0}: Error finding container 2bfa0679b85d040a96265acf1ec5f05ff7e9d833a660ff2fd48f751b8e6ef26e: Status 404 returned error can't find the container with id 2bfa0679b85d040a96265acf1ec5f05ff7e9d833a660ff2fd48f751b8e6ef26e Sep 30 19:03:55 crc kubenswrapper[4688]: I0930 19:03:55.033459 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/crc-debug-7frtz" event={"ID":"bfff748a-db35-4098-8273-56899d94cdd8","Type":"ContainerStarted","Data":"d3f647437b8de15c2a2f0c451e73abf84178b77fd17c29040f6cac424d11d4d0"} Sep 30 19:03:55 crc kubenswrapper[4688]: I0930 19:03:55.034044 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/crc-debug-7frtz" event={"ID":"bfff748a-db35-4098-8273-56899d94cdd8","Type":"ContainerStarted","Data":"2bfa0679b85d040a96265acf1ec5f05ff7e9d833a660ff2fd48f751b8e6ef26e"} Sep 30 19:04:07 crc kubenswrapper[4688]: I0930 19:04:07.431124 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:04:07 crc kubenswrapper[4688]: E0930 19:04:07.432303 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:04:19 crc kubenswrapper[4688]: I0930 19:04:19.430368 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:04:19 crc kubenswrapper[4688]: E0930 19:04:19.431100 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:04:32 crc kubenswrapper[4688]: I0930 19:04:32.429132 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:04:32 crc kubenswrapper[4688]: E0930 19:04:32.429843 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:04:44 crc kubenswrapper[4688]: I0930 19:04:44.430244 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:04:44 crc kubenswrapper[4688]: E0930 19:04:44.430990 4688 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.312992 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tf84h/crc-debug-7frtz" podStartSLOduration=58.31297 podStartE2EDuration="58.31297s" podCreationTimestamp="2025-09-30 19:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 19:03:55.052902231 +0000 UTC m=+6492.348339789" watchObservedRunningTime="2025-09-30 19:04:51.31297 +0000 UTC m=+6548.608407558" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.315270 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xz7bh"] Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.317428 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.328975 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xz7bh"] Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.415780 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-utilities\") pod \"redhat-operators-xz7bh\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.415866 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-catalog-content\") pod \"redhat-operators-xz7bh\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.415954 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f5ph\" (UniqueName: \"kubernetes.io/projected/6cac1c65-f912-409e-bef9-3605ca89110b-kube-api-access-7f5ph\") pod \"redhat-operators-xz7bh\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.517866 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-catalog-content\") pod \"redhat-operators-xz7bh\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.518818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-catalog-content\") pod \"redhat-operators-xz7bh\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.519350 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f5ph\" (UniqueName: \"kubernetes.io/projected/6cac1c65-f912-409e-bef9-3605ca89110b-kube-api-access-7f5ph\") pod \"redhat-operators-xz7bh\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.519474 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-utilities\") pod \"redhat-operators-xz7bh\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.520146 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-utilities\") pod \"redhat-operators-xz7bh\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.541470 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f5ph\" (UniqueName: \"kubernetes.io/projected/6cac1c65-f912-409e-bef9-3605ca89110b-kube-api-access-7f5ph\") pod \"redhat-operators-xz7bh\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:51 crc kubenswrapper[4688]: I0930 19:04:51.657747 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:04:52 crc kubenswrapper[4688]: I0930 19:04:52.217493 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xz7bh"] Sep 30 19:04:52 crc kubenswrapper[4688]: I0930 19:04:52.546499 4688 generic.go:334] "Generic (PLEG): container finished" podID="6cac1c65-f912-409e-bef9-3605ca89110b" containerID="e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715" exitCode=0 Sep 30 19:04:52 crc kubenswrapper[4688]: I0930 19:04:52.546743 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz7bh" event={"ID":"6cac1c65-f912-409e-bef9-3605ca89110b","Type":"ContainerDied","Data":"e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715"} Sep 30 19:04:52 crc kubenswrapper[4688]: I0930 19:04:52.546821 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz7bh" event={"ID":"6cac1c65-f912-409e-bef9-3605ca89110b","Type":"ContainerStarted","Data":"8c25e620087dd36159931dd849ebc0aa122b310d736f17c3026e2f7369d14746"} Sep 30 19:04:54 crc kubenswrapper[4688]: I0930 19:04:54.429849 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f77664db-t4vhn_4f423e22-c7d3-4385-82a5-dca95aef82a0/barbican-api/0.log" Sep 30 19:04:54 crc kubenswrapper[4688]: I0930 19:04:54.519741 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55f77664db-t4vhn_4f423e22-c7d3-4385-82a5-dca95aef82a0/barbican-api-log/0.log" Sep 30 19:04:54 crc kubenswrapper[4688]: I0930 19:04:54.572731 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz7bh" event={"ID":"6cac1c65-f912-409e-bef9-3605ca89110b","Type":"ContainerStarted","Data":"2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9"} Sep 30 19:04:54 crc kubenswrapper[4688]: I0930 
19:04:54.668478 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7bdbc5f8bb-9xltz_8f4dccc2-3d97-4492-acb8-a6e080810069/barbican-keystone-listener/0.log" Sep 30 19:04:54 crc kubenswrapper[4688]: I0930 19:04:54.799741 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7bdbc5f8bb-9xltz_8f4dccc2-3d97-4492-acb8-a6e080810069/barbican-keystone-listener-log/0.log" Sep 30 19:04:54 crc kubenswrapper[4688]: I0930 19:04:54.855268 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66df74b4fc-5jlzp_69fd9ea3-7c23-450d-b3e9-b5bb57e4f324/barbican-worker/0.log" Sep 30 19:04:55 crc kubenswrapper[4688]: I0930 19:04:55.092648 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66df74b4fc-5jlzp_69fd9ea3-7c23-450d-b3e9-b5bb57e4f324/barbican-worker-log/0.log" Sep 30 19:04:55 crc kubenswrapper[4688]: I0930 19:04:55.175189 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-btm2d_3002489d-18d9-4ab0-b09e-ab37e97ec796/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:04:55 crc kubenswrapper[4688]: I0930 19:04:55.418867 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5f96c12-889c-4c0a-a332-9d58c5fa6526/ceilometer-notification-agent/0.log" Sep 30 19:04:55 crc kubenswrapper[4688]: I0930 19:04:55.430185 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5f96c12-889c-4c0a-a332-9d58c5fa6526/ceilometer-central-agent/0.log" Sep 30 19:04:55 crc kubenswrapper[4688]: I0930 19:04:55.512938 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5f96c12-889c-4c0a-a332-9d58c5fa6526/proxy-httpd/0.log" Sep 30 19:04:55 crc kubenswrapper[4688]: I0930 19:04:55.583404 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f5f96c12-889c-4c0a-a332-9d58c5fa6526/sg-core/0.log" Sep 30 19:04:55 crc kubenswrapper[4688]: I0930 19:04:55.768546 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1527bf09-6ef0-4fda-a749-9f6440f17159/cinder-api/0.log" Sep 30 19:04:55 crc kubenswrapper[4688]: I0930 19:04:55.800115 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1527bf09-6ef0-4fda-a749-9f6440f17159/cinder-api-log/0.log" Sep 30 19:04:55 crc kubenswrapper[4688]: I0930 19:04:55.996680 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8344d27c-987c-4c41-8491-9856f5b93c3f/cinder-scheduler/0.log" Sep 30 19:04:56 crc kubenswrapper[4688]: I0930 19:04:56.039899 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8344d27c-987c-4c41-8491-9856f5b93c3f/probe/0.log" Sep 30 19:04:56 crc kubenswrapper[4688]: I0930 19:04:56.194959 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-z475s_4b7f1d4c-3151-4220-87e0-84fb8dff5710/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:04:56 crc kubenswrapper[4688]: I0930 19:04:56.285137 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-f64rr_f8b375f4-25b5-42f4-9b62-763fd31e3404/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:04:56 crc kubenswrapper[4688]: I0930 19:04:56.459050 4688 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-qcftq_03ed4997-f615-4267-9738-7cd2fd9a0cb9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:04:56 crc kubenswrapper[4688]: I0930 19:04:56.588697 4688 generic.go:334] "Generic (PLEG): container finished" podID="6cac1c65-f912-409e-bef9-3605ca89110b" containerID="2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9" exitCode=0 Sep 30 19:04:56 crc kubenswrapper[4688]: I0930 19:04:56.588740 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz7bh" event={"ID":"6cac1c65-f912-409e-bef9-3605ca89110b","Type":"ContainerDied","Data":"2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9"} Sep 30 19:04:56 crc kubenswrapper[4688]: I0930 19:04:56.625054 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b757848d5-m4nb9_7fe9be74-65ff-4cd9-8dc4-866b771262bc/init/0.log" Sep 30 19:04:56 crc kubenswrapper[4688]: I0930 19:04:56.816528 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b757848d5-m4nb9_7fe9be74-65ff-4cd9-8dc4-866b771262bc/init/0.log" Sep 30 19:04:56 crc kubenswrapper[4688]: I0930 19:04:56.955665 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b757848d5-m4nb9_7fe9be74-65ff-4cd9-8dc4-866b771262bc/dnsmasq-dns/0.log" Sep 30 19:04:57 crc kubenswrapper[4688]: I0930 19:04:57.019973 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-h8grj_7f941024-be27-4d1b-8fa3-3d8f1ff434ba/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:04:57 crc kubenswrapper[4688]: I0930 19:04:57.191981 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4/glance-httpd/0.log" Sep 30 19:04:57 crc kubenswrapper[4688]: I0930 19:04:57.316047 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f7cc8d10-5ef9-4db6-bccc-c739bbd9faa4/glance-log/0.log" Sep 30 19:04:57 crc kubenswrapper[4688]: I0930 19:04:57.405232 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_59dd37af-4be8-4c5c-900f-efbd6c09fcf7/glance-httpd/0.log" Sep 30 19:04:57 crc kubenswrapper[4688]: I0930 19:04:57.515303 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_59dd37af-4be8-4c5c-900f-efbd6c09fcf7/glance-log/0.log" Sep 30 19:04:57 crc kubenswrapper[4688]: I0930 19:04:57.597754 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz7bh" event={"ID":"6cac1c65-f912-409e-bef9-3605ca89110b","Type":"ContainerStarted","Data":"cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f"} Sep 30 19:04:57 crc kubenswrapper[4688]: I0930 19:04:57.620607 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xz7bh" podStartSLOduration=2.170440243 podStartE2EDuration="6.620588108s" podCreationTimestamp="2025-09-30 19:04:51 +0000 UTC" firstStartedPulling="2025-09-30 19:04:52.548101993 +0000 UTC m=+6549.843539551" lastFinishedPulling="2025-09-30 19:04:56.998249858 +0000 UTC m=+6554.293687416" observedRunningTime="2025-09-30 19:04:57.615497946 +0000 UTC m=+6554.910935504" watchObservedRunningTime="2025-09-30 19:04:57.620588108 +0000 UTC 
m=+6554.916025656" Sep 30 19:04:57 crc kubenswrapper[4688]: I0930 19:04:57.653127 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dddb98558-nts96_c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c/horizon/0.log" Sep 30 19:04:57 crc kubenswrapper[4688]: I0930 19:04:57.861010 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-d7wch_faf29e19-4d73-430a-937d-2b2b02ef6d70/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:04:58 crc kubenswrapper[4688]: I0930 19:04:58.094087 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-64mcs_a39b16b2-47a0-4014-a6d5-286fb2daa1a2/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:04:58 crc kubenswrapper[4688]: I0930 19:04:58.392439 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29320921-spfn9_38f66d5e-b38e-480b-8537-befc7512cc9e/keystone-cron/0.log" Sep 30 19:04:58 crc kubenswrapper[4688]: I0930 19:04:58.429263 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:04:58 crc kubenswrapper[4688]: E0930 19:04:58.429708 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:04:58 crc kubenswrapper[4688]: I0930 19:04:58.450815 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5dddb98558-nts96_c5430a6e-5e2c-40e5-8b0f-0b0bb4e5db5c/horizon-log/0.log" Sep 30 19:04:58 crc kubenswrapper[4688]: I0930 19:04:58.623449 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29320981-tpstq_ce838482-0a4d-4150-8704-7f041de6c823/keystone-cron/0.log" Sep 30 19:04:58 crc kubenswrapper[4688]: I0930 19:04:58.775455 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7cb94dd69b-49qxn_6b52ebb3-8313-4e9d-9227-bd89bf88639d/keystone-api/0.log" Sep 30 19:04:58 crc kubenswrapper[4688]: I0930 19:04:58.887025 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2feddb6b-68dd-45da-aea7-ce4061c8b067/kube-state-metrics/0.log" Sep 30 19:04:58 crc kubenswrapper[4688]: I0930 19:04:58.989956 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-wghcn_b43c2a9f-e57c-4951-8bd6-ce7741a9c1c8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:04:59 crc kubenswrapper[4688]: I0930 19:04:59.664594 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-st9qd_a36a505a-e276-4557-b4a5-dab7af0803fd/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:04:59 crc kubenswrapper[4688]: I0930 19:04:59.699467 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5d8684c999-h2g4c_fcb14b73-4266-4adf-a6cf-649855edb179/neutron-httpd/0.log" Sep 30 19:04:59 crc kubenswrapper[4688]: I0930 19:04:59.887052 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5d8684c999-h2g4c_fcb14b73-4266-4adf-a6cf-649855edb179/neutron-api/0.log" Sep 30 19:05:00 crc kubenswrapper[4688]: I0930 19:05:00.904802 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d73b96c1-ad6b-4c56-a8db-4d4e69ff323e/nova-cell0-conductor-conductor/0.log" Sep 30 19:05:01 crc kubenswrapper[4688]: I0930 19:05:01.515813 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_357c4a92-d57a-4c57-9cee-7a98de90edff/nova-cell1-conductor-conductor/0.log" Sep 30 19:05:01 crc kubenswrapper[4688]: I0930 19:05:01.657905 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:05:01 crc kubenswrapper[4688]: I0930 19:05:01.659261 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:05:01 crc kubenswrapper[4688]: I0930 19:05:01.840902 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_61afa819-98c6-408a-b62b-9098624f6e38/nova-api-log/0.log" Sep 30 19:05:02 crc kubenswrapper[4688]: I0930 19:05:02.101061 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ba1f86f8-c725-4de3-89f9-4aed0b031859/nova-cell1-novncproxy-novncproxy/0.log" Sep 30 19:05:02 crc kubenswrapper[4688]: I0930 19:05:02.316917 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-62pl5_82afbeb4-8c01-47b4-a07c-b9d1fd405e39/nova-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:05:02 crc kubenswrapper[4688]: I0930 19:05:02.664128 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_61afa819-98c6-408a-b62b-9098624f6e38/nova-api-api/0.log" Sep 30 19:05:02 crc kubenswrapper[4688]: I0930 19:05:02.711756 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xz7bh" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="registry-server" probeResult="failure" output=< Sep 30 19:05:02 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Sep 30 19:05:02 crc kubenswrapper[4688]: > Sep 30 19:05:02 crc kubenswrapper[4688]: I0930 19:05:02.979504 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d423914e-2c66-43fb-984f-8936b0c1e5af/nova-metadata-log/0.log" Sep 30 19:05:03 crc kubenswrapper[4688]: I0930 19:05:03.413340 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_47da376c-d470-460a-90a1-00e696c83821/mysql-bootstrap/0.log" Sep 30 19:05:03 crc kubenswrapper[4688]: I0930 19:05:03.565324 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4967d3fb-572e-4d96-85f2-7f3ef31acc3b/nova-scheduler-scheduler/0.log" Sep 30 19:05:03 crc kubenswrapper[4688]: I0930 19:05:03.635814 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_47da376c-d470-460a-90a1-00e696c83821/mysql-bootstrap/0.log" Sep 30 19:05:03 crc kubenswrapper[4688]: I0930 19:05:03.794178 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_47da376c-d470-460a-90a1-00e696c83821/galera/0.log" Sep 30 19:05:04 crc kubenswrapper[4688]: I0930 19:05:04.029553 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_16d86c38-a2f6-4f33-8a3f-d66ff12bdb38/mysql-bootstrap/0.log" Sep 30 19:05:04 crc kubenswrapper[4688]: I0930 19:05:04.201071 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_16d86c38-a2f6-4f33-8a3f-d66ff12bdb38/mysql-bootstrap/0.log" Sep 30 19:05:04 crc kubenswrapper[4688]: I0930 19:05:04.261892 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_16d86c38-a2f6-4f33-8a3f-d66ff12bdb38/galera/0.log" Sep 30 19:05:04 crc kubenswrapper[4688]: I0930 19:05:04.424550 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7ec99c41-9858-45f2-84b9-c3fd05deb7fd/openstackclient/0.log" Sep 30 19:05:04 crc kubenswrapper[4688]: I0930 19:05:04.663455 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-s67vw_263d16ad-0943-47a4-99f5-4b56dd223472/openstack-network-exporter/0.log" Sep 30 19:05:04 crc kubenswrapper[4688]: I0930 19:05:04.891012 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dsjj5_a4a05b21-e8c9-45f9-bf12-56c74c0672fd/ovsdb-server-init/0.log" Sep 30 19:05:05 crc kubenswrapper[4688]: I0930 19:05:05.108114 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dsjj5_a4a05b21-e8c9-45f9-bf12-56c74c0672fd/ovsdb-server-init/0.log" Sep 30 19:05:05 crc kubenswrapper[4688]: I0930 19:05:05.137548 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dsjj5_a4a05b21-e8c9-45f9-bf12-56c74c0672fd/ovs-vswitchd/0.log" Sep 30 19:05:05 crc kubenswrapper[4688]: I0930 19:05:05.285219 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dsjj5_a4a05b21-e8c9-45f9-bf12-56c74c0672fd/ovsdb-server/0.log" Sep 30 19:05:05 crc kubenswrapper[4688]: I0930 19:05:05.481322 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-v6d9x_a5a55636-8e3a-49bf-8c31-2980f614fd65/ovn-controller/0.log" Sep 30 19:05:05 crc kubenswrapper[4688]: I0930 19:05:05.691847 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qjw26_01be6ded-891d-4d74-9af3-2adcc0d9ebb4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:05:05 crc kubenswrapper[4688]: I0930 19:05:05.906367 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d9a63408-631b-4f20-a556-39bb5835e71e/openstack-network-exporter/0.log" Sep 30 19:05:06 crc kubenswrapper[4688]: I0930 19:05:06.022388 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d9a63408-631b-4f20-a556-39bb5835e71e/ovn-northd/0.log" Sep 30 19:05:06 crc kubenswrapper[4688]: I0930 19:05:06.206987 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de/openstack-network-exporter/0.log" Sep 30 19:05:06 crc kubenswrapper[4688]: I0930 19:05:06.297149 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e0f25b6-fc2a-4e03-b3d4-2a99c7b202de/ovsdbserver-nb/0.log" Sep 30 19:05:06 crc kubenswrapper[4688]: I0930 19:05:06.304463 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d423914e-2c66-43fb-984f-8936b0c1e5af/nova-metadata-metadata/0.log" Sep 30 19:05:06 crc kubenswrapper[4688]: I0930 19:05:06.551623 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_44203c2b-b94c-41a5-908c-a79aa2790bfc/ovsdbserver-sb/0.log" Sep 30 19:05:06 crc kubenswrapper[4688]: I0930 19:05:06.565390 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_44203c2b-b94c-41a5-908c-a79aa2790bfc/openstack-network-exporter/0.log" Sep 30 19:05:07 crc kubenswrapper[4688]: I0930 19:05:07.136601 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_36c7ea93-ad69-4dda-abfe-2679e70870d8/setup-container/0.log" Sep 30 19:05:07 crc kubenswrapper[4688]: I0930 19:05:07.140238 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-67b5cc9656-9lxh4_27f236ad-c05c-4f2e-b68d-d6b867c456f8/placement-api/0.log" Sep 30 19:05:07 crc kubenswrapper[4688]: I0930 19:05:07.253822 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-67b5cc9656-9lxh4_27f236ad-c05c-4f2e-b68d-d6b867c456f8/placement-log/0.log" Sep 30 19:05:07 crc kubenswrapper[4688]: I0930 19:05:07.321663 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_36c7ea93-ad69-4dda-abfe-2679e70870d8/setup-container/0.log" Sep 30 19:05:07 crc kubenswrapper[4688]: I0930 19:05:07.350848 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_36c7ea93-ad69-4dda-abfe-2679e70870d8/rabbitmq/0.log" Sep 30 19:05:07 crc kubenswrapper[4688]: I0930 19:05:07.535365 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_eb8b2810-7037-4a14-8a57-b65f8d9f8af7/setup-container/0.log" Sep 30 19:05:07 crc kubenswrapper[4688]: I0930 19:05:07.771069 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_eb8b2810-7037-4a14-8a57-b65f8d9f8af7/setup-container/0.log" Sep 30 19:05:07 crc kubenswrapper[4688]: I0930 19:05:07.805640 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_eb8b2810-7037-4a14-8a57-b65f8d9f8af7/rabbitmq/0.log" Sep 30 19:05:08 crc kubenswrapper[4688]: I0930 19:05:08.122077 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-hxd45_b95ac163-c6b0-4fa0-a04c-5d4e0389c238/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:05:08 crc kubenswrapper[4688]: I0930 19:05:08.157158 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zqw69_f33c7eae-02eb-440e-ac3a-f16f098efd47/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:05:08 crc kubenswrapper[4688]: I0930 19:05:08.316170 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2wblc_e67f930b-db0b-4d29-b9e5-5a259d6a9534/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:05:08 crc kubenswrapper[4688]: I0930 19:05:08.547007 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-66ftv_857e37c3-03d7-434d-80d4-6ee03574499e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:05:08 crc kubenswrapper[4688]: I0930 19:05:08.677864 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-lb96p_386f214d-f18b-475f-828b-9fb63a356714/ssh-known-hosts-edpm-deployment/0.log" Sep 30 19:05:08 crc kubenswrapper[4688]: I0930 19:05:08.887908 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-647757d8c5-6pzkp_62277faa-c7c7-4151-b35f-f3ebe78d3ffe/proxy-server/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.060129 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9rwlk_d8f8de37-9747-4c9d-b61b-f89ad32ba2fd/swift-ring-rebalance/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.162433 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-647757d8c5-6pzkp_62277faa-c7c7-4151-b35f-f3ebe78d3ffe/proxy-httpd/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.255643 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/account-auditor/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.385368 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/account-reaper/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.518726 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/account-server/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.571100 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/account-replicator/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.600221 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/container-auditor/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.775530 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/container-replicator/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.789989 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/container-server/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.867822 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/container-updater/0.log" Sep 30 19:05:09 crc kubenswrapper[4688]: I0930 19:05:09.970930 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-auditor/0.log" Sep 30 19:05:10 crc kubenswrapper[4688]: I0930 19:05:10.008077 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-expirer/0.log" Sep 30 19:05:10 crc kubenswrapper[4688]: I0930 19:05:10.086377 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-replicator/0.log" Sep 30 19:05:10 crc kubenswrapper[4688]: I0930 19:05:10.178470 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-server/0.log" Sep 30 19:05:10 crc kubenswrapper[4688]: I0930 19:05:10.234374 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/object-updater/0.log" Sep 30 19:05:10 crc kubenswrapper[4688]: I0930 19:05:10.318259 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/rsync/0.log" Sep 30 
19:05:10 crc kubenswrapper[4688]: I0930 19:05:10.417993 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdd2efbd-01c7-43ce-8521-e4b7de27bf28/swift-recon-cron/0.log" Sep 30 19:05:10 crc kubenswrapper[4688]: I0930 19:05:10.428921 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:05:10 crc kubenswrapper[4688]: E0930 19:05:10.429246 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:05:10 crc kubenswrapper[4688]: I0930 19:05:10.559104 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-nzr9p_ffd57ee0-1383-4acb-8892-b5d00ae71ea4/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:05:10 crc kubenswrapper[4688]: I0930 19:05:10.709515 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-tlzxq_4bb29aec-2e71-4505-b1d0-e79b6aaecceb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 19:05:12 crc kubenswrapper[4688]: I0930 19:05:12.715635 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xz7bh" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="registry-server" probeResult="failure" output=< Sep 30 19:05:12 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Sep 30 19:05:12 crc kubenswrapper[4688]: > Sep 30 19:05:16 crc kubenswrapper[4688]: I0930 19:05:16.781885 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1988f233-6da1-4c36-acbc-33f7dbaab1b0/memcached/0.log" Sep 30 19:05:22 crc kubenswrapper[4688]: I0930 19:05:22.704214 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xz7bh" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="registry-server" probeResult="failure" output=< Sep 30 19:05:22 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Sep 30 19:05:22 crc kubenswrapper[4688]: > Sep 30 19:05:25 crc kubenswrapper[4688]: I0930 19:05:25.429528 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:05:25 crc kubenswrapper[4688]: E0930 19:05:25.430321 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.341611 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rm6zw"] Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.344289 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.359077 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rm6zw"] Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.469868 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76mh\" (UniqueName: \"kubernetes.io/projected/8c14146b-5be6-4fd0-92c5-a6ea78742281-kube-api-access-v76mh\") pod \"certified-operators-rm6zw\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.470057 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-utilities\") pod \"certified-operators-rm6zw\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.470093 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-catalog-content\") pod \"certified-operators-rm6zw\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.574784 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76mh\" (UniqueName: \"kubernetes.io/projected/8c14146b-5be6-4fd0-92c5-a6ea78742281-kube-api-access-v76mh\") pod \"certified-operators-rm6zw\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.575016 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-utilities\") pod \"certified-operators-rm6zw\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.575047 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-catalog-content\") pod \"certified-operators-rm6zw\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.575873 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-utilities\") pod \"certified-operators-rm6zw\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.576195 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-catalog-content\") pod \"certified-operators-rm6zw\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.612896 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v76mh\" (UniqueName: \"kubernetes.io/projected/8c14146b-5be6-4fd0-92c5-a6ea78742281-kube-api-access-v76mh\") pod \"certified-operators-rm6zw\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:30 crc kubenswrapper[4688]: I0930 19:05:30.662969 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:31 crc kubenswrapper[4688]: I0930 19:05:31.165655 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rm6zw"] Sep 30 19:05:31 crc kubenswrapper[4688]: I0930 19:05:31.709174 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:05:31 crc kubenswrapper[4688]: I0930 19:05:31.782636 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:05:31 crc kubenswrapper[4688]: I0930 19:05:31.903470 4688 generic.go:334] "Generic (PLEG): container finished" podID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerID="af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed" exitCode=0 Sep 30 19:05:31 crc kubenswrapper[4688]: I0930 19:05:31.903517 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm6zw" event={"ID":"8c14146b-5be6-4fd0-92c5-a6ea78742281","Type":"ContainerDied","Data":"af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed"} Sep 30 19:05:31 crc kubenswrapper[4688]: I0930 19:05:31.903640 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm6zw" event={"ID":"8c14146b-5be6-4fd0-92c5-a6ea78742281","Type":"ContainerStarted","Data":"102a78f75890ff1577928fcc448c9dfabe1b6a45d4df2b748731f81ad8b2de5d"} Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.318564 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xz7bh"] Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.319133 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xz7bh" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="registry-server" containerID="cri-o://cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f" gracePeriod=2 Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.773314 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.923426 4688 generic.go:334] "Generic (PLEG): container finished" podID="6cac1c65-f912-409e-bef9-3605ca89110b" containerID="cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f" exitCode=0 Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.923491 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xz7bh" Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.923507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz7bh" event={"ID":"6cac1c65-f912-409e-bef9-3605ca89110b","Type":"ContainerDied","Data":"cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f"} Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.923537 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xz7bh" event={"ID":"6cac1c65-f912-409e-bef9-3605ca89110b","Type":"ContainerDied","Data":"8c25e620087dd36159931dd849ebc0aa122b310d736f17c3026e2f7369d14746"} Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.923557 4688 scope.go:117] "RemoveContainer" containerID="cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f" Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.928030 4688 generic.go:334] "Generic (PLEG): container finished" podID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerID="85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6" exitCode=0 Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.928068 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm6zw" event={"ID":"8c14146b-5be6-4fd0-92c5-a6ea78742281","Type":"ContainerDied","Data":"85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6"} Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.950963 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-catalog-content\") pod \"6cac1c65-f912-409e-bef9-3605ca89110b\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.951186 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-utilities\") pod \"6cac1c65-f912-409e-bef9-3605ca89110b\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.951389 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f5ph\" (UniqueName: \"kubernetes.io/projected/6cac1c65-f912-409e-bef9-3605ca89110b-kube-api-access-7f5ph\") pod \"6cac1c65-f912-409e-bef9-3605ca89110b\" (UID: \"6cac1c65-f912-409e-bef9-3605ca89110b\") " Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.955350 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-utilities" (OuterVolumeSpecName: "utilities") pod "6cac1c65-f912-409e-bef9-3605ca89110b" (UID: "6cac1c65-f912-409e-bef9-3605ca89110b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.958640 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cac1c65-f912-409e-bef9-3605ca89110b-kube-api-access-7f5ph" (OuterVolumeSpecName: "kube-api-access-7f5ph") pod "6cac1c65-f912-409e-bef9-3605ca89110b" (UID: "6cac1c65-f912-409e-bef9-3605ca89110b"). InnerVolumeSpecName "kube-api-access-7f5ph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:05:33 crc kubenswrapper[4688]: I0930 19:05:33.960735 4688 scope.go:117] "RemoveContainer" containerID="2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.034251 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cac1c65-f912-409e-bef9-3605ca89110b" (UID: "6cac1c65-f912-409e-bef9-3605ca89110b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.035102 4688 scope.go:117] "RemoveContainer" containerID="e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.056265 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.056499 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cac1c65-f912-409e-bef9-3605ca89110b-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.056516 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f5ph\" (UniqueName: \"kubernetes.io/projected/6cac1c65-f912-409e-bef9-3605ca89110b-kube-api-access-7f5ph\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.078944 4688 scope.go:117] "RemoveContainer" containerID="cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f" Sep 30 19:05:34 crc kubenswrapper[4688]: E0930 19:05:34.079427 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f\": container with ID starting with cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f not found: ID does not exist" containerID="cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.079460 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f"} err="failed to get container status \"cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f\": rpc error: code = NotFound desc = could not find container \"cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f\": container with ID starting with cc49dcd58fa3f66d1194bde4fcfd57cae4f2b5ba4ac36068b1fb3b559526d86f not found: ID does not exist" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.079481 4688 scope.go:117] "RemoveContainer" containerID="2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9" Sep 30 19:05:34 crc kubenswrapper[4688]: E0930 19:05:34.079777 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9\": container with ID starting with 2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9 not found: ID does not exist" containerID="2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9" Sep 
30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.079795 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9"} err="failed to get container status \"2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9\": rpc error: code = NotFound desc = could not find container \"2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9\": container with ID starting with 2064166ee7d292a7d316464ed15dddff5a577d8709c1b0e341244a9cc0182be9 not found: ID does not exist" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.079809 4688 scope.go:117] "RemoveContainer" containerID="e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715" Sep 30 19:05:34 crc kubenswrapper[4688]: E0930 19:05:34.080057 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715\": container with ID starting with e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715 not found: ID does not exist" containerID="e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.080077 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715"} err="failed to get container status \"e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715\": rpc error: code = NotFound desc = could not find container \"e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715\": container with ID starting with e48a603214d24f0e688a43a843f0c7132626fb52faa4504e7f0e01ce78a5c715 not found: ID does not exist" Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.257616 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xz7bh"] Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.265957 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xz7bh"] Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.944342 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm6zw" event={"ID":"8c14146b-5be6-4fd0-92c5-a6ea78742281","Type":"ContainerStarted","Data":"08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59"} Sep 30 19:05:34 crc kubenswrapper[4688]: I0930 19:05:34.984676 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rm6zw" podStartSLOduration=2.540040207 podStartE2EDuration="4.984650318s" podCreationTimestamp="2025-09-30 19:05:30 +0000 UTC" firstStartedPulling="2025-09-30 19:05:31.905205694 +0000 UTC m=+6589.200643252" lastFinishedPulling="2025-09-30 19:05:34.349815785 +0000 UTC m=+6591.645253363" observedRunningTime="2025-09-30 19:05:34.976950289 +0000 UTC m=+6592.272387847" watchObservedRunningTime="2025-09-30 19:05:34.984650318 +0000 UTC m=+6592.280087896" Sep 30 19:05:35 crc kubenswrapper[4688]: I0930 19:05:35.443745 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" path="/var/lib/kubelet/pods/6cac1c65-f912-409e-bef9-3605ca89110b/volumes" Sep 30 19:05:38 crc kubenswrapper[4688]: I0930 19:05:38.429837 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" 
Sep 30 19:05:38 crc kubenswrapper[4688]: E0930 19:05:38.430384 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:05:40 crc kubenswrapper[4688]: I0930 19:05:40.664078 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:40 crc kubenswrapper[4688]: I0930 19:05:40.665618 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:40 crc kubenswrapper[4688]: I0930 19:05:40.717560 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:41 crc kubenswrapper[4688]: I0930 19:05:41.072240 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:41 crc kubenswrapper[4688]: I0930 19:05:41.125090 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rm6zw"] Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.031977 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rm6zw" podUID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerName="registry-server" containerID="cri-o://08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59" gracePeriod=2 Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.523435 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.637735 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-catalog-content\") pod \"8c14146b-5be6-4fd0-92c5-a6ea78742281\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.637779 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-utilities\") pod \"8c14146b-5be6-4fd0-92c5-a6ea78742281\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.637835 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v76mh\" (UniqueName: \"kubernetes.io/projected/8c14146b-5be6-4fd0-92c5-a6ea78742281-kube-api-access-v76mh\") pod \"8c14146b-5be6-4fd0-92c5-a6ea78742281\" (UID: \"8c14146b-5be6-4fd0-92c5-a6ea78742281\") " Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.639303 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-utilities" (OuterVolumeSpecName: "utilities") pod "8c14146b-5be6-4fd0-92c5-a6ea78742281" (UID: "8c14146b-5be6-4fd0-92c5-a6ea78742281"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.643350 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c14146b-5be6-4fd0-92c5-a6ea78742281-kube-api-access-v76mh" (OuterVolumeSpecName: "kube-api-access-v76mh") pod "8c14146b-5be6-4fd0-92c5-a6ea78742281" (UID: "8c14146b-5be6-4fd0-92c5-a6ea78742281"). InnerVolumeSpecName "kube-api-access-v76mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.720627 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c14146b-5be6-4fd0-92c5-a6ea78742281" (UID: "8c14146b-5be6-4fd0-92c5-a6ea78742281"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.739665 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.739714 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c14146b-5be6-4fd0-92c5-a6ea78742281-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:43 crc kubenswrapper[4688]: I0930 19:05:43.739728 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v76mh\" (UniqueName: \"kubernetes.io/projected/8c14146b-5be6-4fd0-92c5-a6ea78742281-kube-api-access-v76mh\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.040781 4688 generic.go:334] "Generic (PLEG): container finished" podID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerID="08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59" exitCode=0 Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.040839 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm6zw" event={"ID":"8c14146b-5be6-4fd0-92c5-a6ea78742281","Type":"ContainerDied","Data":"08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59"} Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.040892 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm6zw" event={"ID":"8c14146b-5be6-4fd0-92c5-a6ea78742281","Type":"ContainerDied","Data":"102a78f75890ff1577928fcc448c9dfabe1b6a45d4df2b748731f81ad8b2de5d"} Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.040897 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rm6zw" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.040915 4688 scope.go:117] "RemoveContainer" containerID="08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.073014 4688 scope.go:117] "RemoveContainer" containerID="85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.085773 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rm6zw"] Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.098049 4688 scope.go:117] "RemoveContainer" containerID="af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.100489 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rm6zw"] Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.149657 4688 scope.go:117] "RemoveContainer" containerID="08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59" Sep 30 19:05:44 crc kubenswrapper[4688]: E0930 19:05:44.150006 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59\": container with ID starting with 08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59 not found: ID does not exist" containerID="08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.150045 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59"} err="failed to get container status \"08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59\": rpc error: code = NotFound desc = could not find container \"08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59\": container with ID starting with 08d5a99147a4b5fee830be6509bf23ce8b9cad4b53b71644f6fa33d9560adc59 not found: ID does not exist" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.150070 4688 scope.go:117] "RemoveContainer" containerID="85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6" Sep 30 19:05:44 crc kubenswrapper[4688]: E0930 19:05:44.150385 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6\": container with ID starting with 85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6 not found: ID does not exist" containerID="85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.150410 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6"} err="failed to get container status \"85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6\": rpc error: code = NotFound desc = could not find container \"85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6\": container with ID starting with 85185bb858742b338aff6ae3b8eff191db39bdd7e05d4d581a56e8e375ac94a6 not found: ID does not exist" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.150424 4688 scope.go:117] "RemoveContainer" 
containerID="af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed" Sep 30 19:05:44 crc kubenswrapper[4688]: E0930 19:05:44.150682 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed\": container with ID starting with af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed not found: ID does not exist" containerID="af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed" Sep 30 19:05:44 crc kubenswrapper[4688]: I0930 19:05:44.150713 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed"} err="failed to get container status \"af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed\": rpc error: code = NotFound desc = could not find container \"af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed\": container with ID starting with af00ca63b059eafa5309142fb366ecbe14805c6eecf2949089260df9752812ed not found: ID does not exist" Sep 30 19:05:45 crc kubenswrapper[4688]: I0930 19:05:45.445630 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c14146b-5be6-4fd0-92c5-a6ea78742281" path="/var/lib/kubelet/pods/8c14146b-5be6-4fd0-92c5-a6ea78742281/volumes" Sep 30 19:05:50 crc kubenswrapper[4688]: I0930 19:05:50.430221 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:05:50 crc kubenswrapper[4688]: E0930 19:05:50.431549 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:05:51 crc kubenswrapper[4688]: I0930 19:05:51.032383 4688 scope.go:117] "RemoveContainer" containerID="0e5a39d53f1c3cf4b205a79b0221080351e51afd6d3d919b9ae47967f54ae9f9" Sep 30 19:05:53 crc kubenswrapper[4688]: I0930 19:05:53.140471 4688 generic.go:334] "Generic (PLEG): container finished" podID="bfff748a-db35-4098-8273-56899d94cdd8" containerID="d3f647437b8de15c2a2f0c451e73abf84178b77fd17c29040f6cac424d11d4d0" exitCode=0 Sep 30 19:05:53 crc kubenswrapper[4688]: I0930 19:05:53.140541 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/crc-debug-7frtz" event={"ID":"bfff748a-db35-4098-8273-56899d94cdd8","Type":"ContainerDied","Data":"d3f647437b8de15c2a2f0c451e73abf84178b77fd17c29040f6cac424d11d4d0"} Sep 30 19:05:54 crc kubenswrapper[4688]: I0930 19:05:54.245698 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:05:54 crc kubenswrapper[4688]: I0930 19:05:54.291285 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tf84h/crc-debug-7frtz"] Sep 30 19:05:54 crc kubenswrapper[4688]: I0930 19:05:54.299508 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tf84h/crc-debug-7frtz"] Sep 30 19:05:54 crc kubenswrapper[4688]: I0930 19:05:54.385311 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfff748a-db35-4098-8273-56899d94cdd8-host\") pod \"bfff748a-db35-4098-8273-56899d94cdd8\" (UID: \"bfff748a-db35-4098-8273-56899d94cdd8\") " Sep 30 19:05:54 crc kubenswrapper[4688]: I0930 19:05:54.385396 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r97gj\" (UniqueName: \"kubernetes.io/projected/bfff748a-db35-4098-8273-56899d94cdd8-kube-api-access-r97gj\") pod \"bfff748a-db35-4098-8273-56899d94cdd8\" (UID: \"bfff748a-db35-4098-8273-56899d94cdd8\") " Sep 30 19:05:54 crc kubenswrapper[4688]: I0930 19:05:54.385421 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfff748a-db35-4098-8273-56899d94cdd8-host" (OuterVolumeSpecName: "host") pod "bfff748a-db35-4098-8273-56899d94cdd8" (UID: "bfff748a-db35-4098-8273-56899d94cdd8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 19:05:54 crc kubenswrapper[4688]: I0930 19:05:54.386313 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfff748a-db35-4098-8273-56899d94cdd8-host\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:54 crc kubenswrapper[4688]: I0930 19:05:54.393327 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfff748a-db35-4098-8273-56899d94cdd8-kube-api-access-r97gj" (OuterVolumeSpecName: "kube-api-access-r97gj") pod "bfff748a-db35-4098-8273-56899d94cdd8" (UID: "bfff748a-db35-4098-8273-56899d94cdd8"). InnerVolumeSpecName "kube-api-access-r97gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:05:54 crc kubenswrapper[4688]: I0930 19:05:54.489256 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r97gj\" (UniqueName: \"kubernetes.io/projected/bfff748a-db35-4098-8273-56899d94cdd8-kube-api-access-r97gj\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.162939 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bfa0679b85d040a96265acf1ec5f05ff7e9d833a660ff2fd48f751b8e6ef26e" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.163011 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-7frtz" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.441117 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfff748a-db35-4098-8273-56899d94cdd8" path="/var/lib/kubelet/pods/bfff748a-db35-4098-8273-56899d94cdd8/volumes" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.449359 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tf84h/crc-debug-nj4jh"] Sep 30 19:05:55 crc kubenswrapper[4688]: E0930 19:05:55.449799 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfff748a-db35-4098-8273-56899d94cdd8" containerName="container-00" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.449818 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfff748a-db35-4098-8273-56899d94cdd8" containerName="container-00" Sep 30 19:05:55 crc kubenswrapper[4688]: E0930 19:05:55.449842 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="extract-content" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.449850 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="extract-content" Sep 30 19:05:55 crc kubenswrapper[4688]: E0930 19:05:55.449862 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerName="registry-server" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.449869 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerName="registry-server" Sep 30 19:05:55 crc kubenswrapper[4688]: E0930 19:05:55.449879 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerName="extract-utilities" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.449887 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerName="extract-utilities" Sep 30 19:05:55 crc kubenswrapper[4688]: E0930 19:05:55.449904 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="extract-utilities" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.449912 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="extract-utilities" Sep 30 19:05:55 crc kubenswrapper[4688]: E0930 19:05:55.449920 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="registry-server" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.449928 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="registry-server" Sep 30 19:05:55 crc kubenswrapper[4688]: E0930 19:05:55.449955 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerName="extract-content" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.449964 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerName="extract-content" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.450199 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c14146b-5be6-4fd0-92c5-a6ea78742281" containerName="registry-server" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.450214 4688 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6cac1c65-f912-409e-bef9-3605ca89110b" containerName="registry-server" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.450228 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfff748a-db35-4098-8273-56899d94cdd8" containerName="container-00" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.450952 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.610857 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b027985-81dc-494e-9aca-ec7b2e6d1dea-host\") pod \"crc-debug-nj4jh\" (UID: \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\") " pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.610953 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fchtv\" (UniqueName: \"kubernetes.io/projected/4b027985-81dc-494e-9aca-ec7b2e6d1dea-kube-api-access-fchtv\") pod \"crc-debug-nj4jh\" (UID: \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\") " pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.713531 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b027985-81dc-494e-9aca-ec7b2e6d1dea-host\") pod \"crc-debug-nj4jh\" (UID: \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\") " pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.713648 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b027985-81dc-494e-9aca-ec7b2e6d1dea-host\") pod \"crc-debug-nj4jh\" (UID: \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\") " pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.713763 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fchtv\" (UniqueName: \"kubernetes.io/projected/4b027985-81dc-494e-9aca-ec7b2e6d1dea-kube-api-access-fchtv\") pod \"crc-debug-nj4jh\" (UID: \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\") " pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.732468 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fchtv\" (UniqueName: \"kubernetes.io/projected/4b027985-81dc-494e-9aca-ec7b2e6d1dea-kube-api-access-fchtv\") pod \"crc-debug-nj4jh\" (UID: \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\") " pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:05:55 crc kubenswrapper[4688]: I0930 19:05:55.779842 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:05:56 crc kubenswrapper[4688]: I0930 19:05:56.171230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/crc-debug-nj4jh" event={"ID":"4b027985-81dc-494e-9aca-ec7b2e6d1dea","Type":"ContainerStarted","Data":"cc7d2d97641445190d5bca32ca258439c11affbfd3688fe84540e20a6b386aea"} Sep 30 19:05:56 crc kubenswrapper[4688]: I0930 19:05:56.171663 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/crc-debug-nj4jh" event={"ID":"4b027985-81dc-494e-9aca-ec7b2e6d1dea","Type":"ContainerStarted","Data":"0d17fa55ce5ee727158c724451cdc55c761aaf04d47edeb2683701b871e64be3"} Sep 30 19:05:56 crc kubenswrapper[4688]: I0930 19:05:56.186249 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tf84h/crc-debug-nj4jh" podStartSLOduration=1.186229708 podStartE2EDuration="1.186229708s" podCreationTimestamp="2025-09-30 19:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 19:05:56.184238717 +0000 UTC m=+6613.479676285" watchObservedRunningTime="2025-09-30 19:05:56.186229708 +0000 UTC m=+6613.481667266" Sep 30 19:05:57 crc kubenswrapper[4688]: I0930 19:05:57.179260 4688 generic.go:334] "Generic (PLEG): container finished" podID="4b027985-81dc-494e-9aca-ec7b2e6d1dea" containerID="cc7d2d97641445190d5bca32ca258439c11affbfd3688fe84540e20a6b386aea" exitCode=0 Sep 30 19:05:57 crc kubenswrapper[4688]: I0930 19:05:57.179308 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/crc-debug-nj4jh" event={"ID":"4b027985-81dc-494e-9aca-ec7b2e6d1dea","Type":"ContainerDied","Data":"cc7d2d97641445190d5bca32ca258439c11affbfd3688fe84540e20a6b386aea"} Sep 30 19:05:58 crc kubenswrapper[4688]: I0930 19:05:58.270375 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:05:58 crc kubenswrapper[4688]: I0930 19:05:58.454618 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b027985-81dc-494e-9aca-ec7b2e6d1dea-host\") pod \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\" (UID: \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\") " Sep 30 19:05:58 crc kubenswrapper[4688]: I0930 19:05:58.454724 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b027985-81dc-494e-9aca-ec7b2e6d1dea-host" (OuterVolumeSpecName: "host") pod "4b027985-81dc-494e-9aca-ec7b2e6d1dea" (UID: "4b027985-81dc-494e-9aca-ec7b2e6d1dea"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 19:05:58 crc kubenswrapper[4688]: I0930 19:05:58.455280 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fchtv\" (UniqueName: \"kubernetes.io/projected/4b027985-81dc-494e-9aca-ec7b2e6d1dea-kube-api-access-fchtv\") pod \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\" (UID: \"4b027985-81dc-494e-9aca-ec7b2e6d1dea\") " Sep 30 19:05:58 crc kubenswrapper[4688]: I0930 19:05:58.457171 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b027985-81dc-494e-9aca-ec7b2e6d1dea-host\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:58 crc kubenswrapper[4688]: I0930 19:05:58.478864 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b027985-81dc-494e-9aca-ec7b2e6d1dea-kube-api-access-fchtv" (OuterVolumeSpecName: "kube-api-access-fchtv") pod "4b027985-81dc-494e-9aca-ec7b2e6d1dea" (UID: "4b027985-81dc-494e-9aca-ec7b2e6d1dea"). InnerVolumeSpecName "kube-api-access-fchtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:05:58 crc kubenswrapper[4688]: I0930 19:05:58.557881 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fchtv\" (UniqueName: \"kubernetes.io/projected/4b027985-81dc-494e-9aca-ec7b2e6d1dea-kube-api-access-fchtv\") on node \"crc\" DevicePath \"\"" Sep 30 19:05:59 crc kubenswrapper[4688]: I0930 19:05:59.199438 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/crc-debug-nj4jh" event={"ID":"4b027985-81dc-494e-9aca-ec7b2e6d1dea","Type":"ContainerDied","Data":"0d17fa55ce5ee727158c724451cdc55c761aaf04d47edeb2683701b871e64be3"} Sep 30 19:05:59 crc kubenswrapper[4688]: I0930 19:05:59.199701 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d17fa55ce5ee727158c724451cdc55c761aaf04d47edeb2683701b871e64be3" Sep 30 19:05:59 crc kubenswrapper[4688]: I0930 19:05:59.199497 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-nj4jh" Sep 30 19:06:01 crc kubenswrapper[4688]: I0930 19:06:01.428877 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:06:01 crc kubenswrapper[4688]: E0930 19:06:01.429394 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:06:05 crc kubenswrapper[4688]: I0930 19:06:05.929260 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tf84h/crc-debug-nj4jh"] Sep 30 19:06:05 crc kubenswrapper[4688]: I0930 19:06:05.941958 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tf84h/crc-debug-nj4jh"] Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.170255 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tf84h/crc-debug-hkrhv"] Sep 30 19:06:07 crc kubenswrapper[4688]: E0930 19:06:07.171032 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b027985-81dc-494e-9aca-ec7b2e6d1dea" containerName="container-00" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.171048 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b027985-81dc-494e-9aca-ec7b2e6d1dea" containerName="container-00" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.171301 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b027985-81dc-494e-9aca-ec7b2e6d1dea" containerName="container-00" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.172456 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.289790 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8ff4f9c-ed9f-4177-9efe-72451eee9693-host\") pod \"crc-debug-hkrhv\" (UID: \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\") " pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.289957 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mgdl\" (UniqueName: \"kubernetes.io/projected/b8ff4f9c-ed9f-4177-9efe-72451eee9693-kube-api-access-4mgdl\") pod \"crc-debug-hkrhv\" (UID: \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\") " pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.392147 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8ff4f9c-ed9f-4177-9efe-72451eee9693-host\") pod \"crc-debug-hkrhv\" (UID: \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\") " pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.392302 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8ff4f9c-ed9f-4177-9efe-72451eee9693-host\") pod \"crc-debug-hkrhv\" (UID: \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\") " pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.392449 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mgdl\" (UniqueName: \"kubernetes.io/projected/b8ff4f9c-ed9f-4177-9efe-72451eee9693-kube-api-access-4mgdl\") pod \"crc-debug-hkrhv\" (UID: \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\") " pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.419903 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mgdl\" (UniqueName: \"kubernetes.io/projected/b8ff4f9c-ed9f-4177-9efe-72451eee9693-kube-api-access-4mgdl\") pod \"crc-debug-hkrhv\" (UID: \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\") " pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.443842 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b027985-81dc-494e-9aca-ec7b2e6d1dea" path="/var/lib/kubelet/pods/4b027985-81dc-494e-9aca-ec7b2e6d1dea/volumes" Sep 30 19:06:07 crc kubenswrapper[4688]: I0930 19:06:07.492144 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:08 crc kubenswrapper[4688]: I0930 19:06:08.289454 4688 generic.go:334] "Generic (PLEG): container finished" podID="b8ff4f9c-ed9f-4177-9efe-72451eee9693" containerID="acbab93d6e258f4585cd20acf42a347b97e56c818652a45ea829a243787ea4ae" exitCode=0 Sep 30 19:06:08 crc kubenswrapper[4688]: I0930 19:06:08.289624 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/crc-debug-hkrhv" event={"ID":"b8ff4f9c-ed9f-4177-9efe-72451eee9693","Type":"ContainerDied","Data":"acbab93d6e258f4585cd20acf42a347b97e56c818652a45ea829a243787ea4ae"} Sep 30 19:06:08 crc kubenswrapper[4688]: I0930 19:06:08.289922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/crc-debug-hkrhv" event={"ID":"b8ff4f9c-ed9f-4177-9efe-72451eee9693","Type":"ContainerStarted","Data":"383a73fc033f26a00a5d67cc45312c139b80d5d4166676bb7eaac8f955039072"} Sep 30 19:06:08 crc kubenswrapper[4688]: I0930 19:06:08.349591 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tf84h/crc-debug-hkrhv"] Sep 30 19:06:08 crc kubenswrapper[4688]: I0930 19:06:08.359157 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tf84h/crc-debug-hkrhv"] Sep 30 19:06:09 crc kubenswrapper[4688]: I0930 19:06:09.420145 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:09 crc kubenswrapper[4688]: I0930 19:06:09.458365 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8ff4f9c-ed9f-4177-9efe-72451eee9693-host\") pod \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\" (UID: \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\") " Sep 30 19:06:09 crc kubenswrapper[4688]: I0930 19:06:09.458504 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mgdl\" (UniqueName: \"kubernetes.io/projected/b8ff4f9c-ed9f-4177-9efe-72451eee9693-kube-api-access-4mgdl\") pod \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\" (UID: \"b8ff4f9c-ed9f-4177-9efe-72451eee9693\") " Sep 30 19:06:09 crc kubenswrapper[4688]: I0930 19:06:09.458900 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8ff4f9c-ed9f-4177-9efe-72451eee9693-host" (OuterVolumeSpecName: "host") pod "b8ff4f9c-ed9f-4177-9efe-72451eee9693" (UID: "b8ff4f9c-ed9f-4177-9efe-72451eee9693"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 19:06:09 crc kubenswrapper[4688]: I0930 19:06:09.460745 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8ff4f9c-ed9f-4177-9efe-72451eee9693-host\") on node \"crc\" DevicePath \"\"" Sep 30 19:06:09 crc kubenswrapper[4688]: I0930 19:06:09.483738 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ff4f9c-ed9f-4177-9efe-72451eee9693-kube-api-access-4mgdl" (OuterVolumeSpecName: "kube-api-access-4mgdl") pod "b8ff4f9c-ed9f-4177-9efe-72451eee9693" (UID: "b8ff4f9c-ed9f-4177-9efe-72451eee9693"). InnerVolumeSpecName "kube-api-access-4mgdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:06:09 crc kubenswrapper[4688]: I0930 19:06:09.562919 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mgdl\" (UniqueName: \"kubernetes.io/projected/b8ff4f9c-ed9f-4177-9efe-72451eee9693-kube-api-access-4mgdl\") on node \"crc\" DevicePath \"\"" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.048347 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/util/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.224400 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/pull/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.236647 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/util/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.269431 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/pull/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.310145 4688 scope.go:117] "RemoveContainer" containerID="acbab93d6e258f4585cd20acf42a347b97e56c818652a45ea829a243787ea4ae" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.310192 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tf84h/crc-debug-hkrhv" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.412805 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/pull/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.420268 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/extract/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.453824 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_36df904cc6394562a9b75ce933f9d6f1f2f71c490f54b16f2e7f86f4adbnv9m_1aa09f7c-6338-4950-97d8-13e3627ce120/util/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.570989 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-zrrc8_b4b7968e-2dbf-463c-addf-f79bb82c5fb3/kube-rbac-proxy/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.668310 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-zrrc8_b4b7968e-2dbf-463c-addf-f79bb82c5fb3/manager/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.684705 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-6ng88_ef1716f0-3b35-433b-8ad9-5127c151a43f/kube-rbac-proxy/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.782580 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-6ng88_ef1716f0-3b35-433b-8ad9-5127c151a43f/manager/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.836025 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-xnhgk_08d7f313-71b6-4d2e-b67d-c9e1ddaca02d/manager/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.838642 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-xnhgk_08d7f313-71b6-4d2e-b67d-c9e1ddaca02d/kube-rbac-proxy/0.log" Sep 30 19:06:10 crc kubenswrapper[4688]: I0930 19:06:10.996448 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-98rhq_1bea0309-4386-4a05-a980-1104ec2e37c2/kube-rbac-proxy/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.079431 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-98rhq_1bea0309-4386-4a05-a980-1104ec2e37c2/manager/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.186408 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-vgshh_7c01591d-ed02-4a38-b0ec-560d9b41cc8c/manager/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.202034 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-vgshh_7c01591d-ed02-4a38-b0ec-560d9b41cc8c/kube-rbac-proxy/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.287657 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-bklgp_8a89103b-24b0-456d-88c5-f6abd68e75fb/kube-rbac-proxy/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.370991 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-bklgp_8a89103b-24b0-456d-88c5-f6abd68e75fb/manager/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.442181 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ff4f9c-ed9f-4177-9efe-72451eee9693" path="/var/lib/kubelet/pods/b8ff4f9c-ed9f-4177-9efe-72451eee9693/volumes" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.444089 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-zslc4_8eba803d-c206-4418-a0b6-8594af2670aa/kube-rbac-proxy/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.593063 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-7l8bg_940175df-0d37-4540-b0c7-a81bc3261ba6/kube-rbac-proxy/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.615257 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-zslc4_8eba803d-c206-4418-a0b6-8594af2670aa/manager/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.632235 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-7l8bg_940175df-0d37-4540-b0c7-a81bc3261ba6/manager/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.766575 4688 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-t64tg_505c7e1c-5b6c-4736-8a1f-93ed5a92b47e/kube-rbac-proxy/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.901656 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-t64tg_505c7e1c-5b6c-4736-8a1f-93ed5a92b47e/manager/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.950189 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-7tgsx_398df167-3c2c-416f-b77c-baafa00ce747/kube-rbac-proxy/0.log" Sep 30 19:06:11 crc kubenswrapper[4688]: I0930 19:06:11.992644 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-7tgsx_398df167-3c2c-416f-b77c-baafa00ce747/manager/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.059849 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-886sw_f9796897-79b7-4273-8f85-f3c064a465af/kube-rbac-proxy/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.114067 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-886sw_f9796897-79b7-4273-8f85-f3c064a465af/manager/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.218115 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-h8tzm_16f6e3cb-02d6-4820-953b-a728440b812f/kube-rbac-proxy/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.340063 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-h8tzm_16f6e3cb-02d6-4820-953b-a728440b812f/manager/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.343847 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-q5q9z_ef369140-3b22-4cd7-926f-525ebde98f1a/kube-rbac-proxy/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.501027 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-q5q9z_ef369140-3b22-4cd7-926f-525ebde98f1a/manager/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.513623 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-69zkg_f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d/kube-rbac-proxy/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.568322 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-69zkg_f2e1e60e-0a1d-45a1-a528-b9ddd89e3f1d/manager/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.647898 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-m24qn_145c6edc-16b0-47ec-846b-638e354aa1dd/kube-rbac-proxy/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.690677 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-m24qn_145c6edc-16b0-47ec-846b-638e354aa1dd/manager/0.log" Sep 30 19:06:12 crc kubenswrapper[4688]: I0930 19:06:12.974922 4688 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-89bc4fb57-2x7jt_fc9e0382-183b-41ab-8890-9ad95c9ebba4/kube-rbac-proxy/0.log" Sep 30 19:06:13 crc kubenswrapper[4688]: I0930 19:06:13.074034 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-77647d9b49-6c75r_0e55afb9-bd88-450e-bb6a-5cba368b3f05/kube-rbac-proxy/0.log" Sep 30 19:06:13 crc kubenswrapper[4688]: I0930 19:06:13.306462 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-f55sl_0d838e78-fbd6-46d0-a298-efa17c407500/registry-server/0.log" Sep 30 19:06:13 crc kubenswrapper[4688]: I0930 19:06:13.337705 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-77647d9b49-6c75r_0e55afb9-bd88-450e-bb6a-5cba368b3f05/operator/0.log" Sep 30 19:06:13 crc kubenswrapper[4688]: I0930 19:06:13.523207 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5ddfc99c65-m7slc_9e74b6a2-2c57-4514-9709-9db9b0a76d55/kube-rbac-proxy/0.log" Sep 30 19:06:13 crc kubenswrapper[4688]: I0930 19:06:13.629503 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5ddfc99c65-m7slc_9e74b6a2-2c57-4514-9709-9db9b0a76d55/manager/0.log" Sep 30 19:06:13 crc kubenswrapper[4688]: I0930 19:06:13.699049 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-rd5sx_70e2e81b-be62-472a-8b70-d5aadace742e/kube-rbac-proxy/0.log" Sep 30 19:06:13 crc kubenswrapper[4688]: I0930 19:06:13.735326 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-rd5sx_70e2e81b-be62-472a-8b70-d5aadace742e/manager/0.log" Sep 30 19:06:13 crc kubenswrapper[4688]: I0930 19:06:13.896096 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-79d8469568-hxqmr_a01178d9-6b72-442e-b455-e5625b5e154e/operator/0.log" Sep 30 19:06:13 crc kubenswrapper[4688]: I0930 19:06:13.977517 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-wkxsn_0768b406-8010-4804-8769-634ee21211ed/kube-rbac-proxy/0.log" Sep 30 19:06:14 crc kubenswrapper[4688]: I0930 19:06:14.060522 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-wkxsn_0768b406-8010-4804-8769-634ee21211ed/manager/0.log" Sep 30 19:06:14 crc kubenswrapper[4688]: I0930 19:06:14.121831 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-89bc4fb57-2x7jt_fc9e0382-183b-41ab-8890-9ad95c9ebba4/manager/0.log" Sep 30 19:06:14 crc kubenswrapper[4688]: I0930 19:06:14.144997 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-g6d6b_e2803e31-92d8-4071-8a82-84249fc01718/kube-rbac-proxy/0.log" Sep 30 19:06:14 crc kubenswrapper[4688]: I0930 19:06:14.276633 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-g6d6b_e2803e31-92d8-4071-8a82-84249fc01718/manager/0.log" Sep 30 19:06:14 crc kubenswrapper[4688]: I0930 19:06:14.286199 
4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-cdx5c_26fa1afc-ed5f-4541-aab0-e003a11f653c/kube-rbac-proxy/0.log" Sep 30 19:06:14 crc kubenswrapper[4688]: I0930 19:06:14.320345 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-cdx5c_26fa1afc-ed5f-4541-aab0-e003a11f653c/manager/0.log" Sep 30 19:06:14 crc kubenswrapper[4688]: I0930 19:06:14.403933 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-f2nzh_19e2aa18-5fe3-4451-8ca1-303b87a5e131/kube-rbac-proxy/0.log" Sep 30 19:06:14 crc kubenswrapper[4688]: I0930 19:06:14.451796 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-f2nzh_19e2aa18-5fe3-4451-8ca1-303b87a5e131/manager/0.log" Sep 30 19:06:15 crc kubenswrapper[4688]: I0930 19:06:15.433279 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:06:15 crc kubenswrapper[4688]: E0930 19:06:15.433813 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:06:26 crc kubenswrapper[4688]: I0930 19:06:26.429191 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:06:26 crc kubenswrapper[4688]: E0930 19:06:26.429872 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:06:28 crc kubenswrapper[4688]: I0930 19:06:28.757080 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cfd84_6f650a2d-90e1-465d-bc6b-1451aa0099ac/control-plane-machine-set-operator/0.log" Sep 30 19:06:28 crc kubenswrapper[4688]: I0930 19:06:28.916918 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tfh56_8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda/kube-rbac-proxy/0.log" Sep 30 19:06:28 crc kubenswrapper[4688]: I0930 19:06:28.947024 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tfh56_8f722bd5-bc8b-40a1-a0a8-fcb77edc2eda/machine-api-operator/0.log" Sep 30 19:06:40 crc kubenswrapper[4688]: I0930 19:06:40.429282 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:06:40 crc kubenswrapper[4688]: E0930 19:06:40.430947 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jggn4_openshift-machine-config-operator(f607cccb-8e9d-47ff-8c93-6f4a791fa36b)\"" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" Sep 30 19:06:41 crc kubenswrapper[4688]: I0930 19:06:41.040522 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-wgz84_c9ab9762-a8e2-41f3-a6ed-e6a47c6fdc15/cert-manager-controller/0.log" Sep 30 19:06:41 crc kubenswrapper[4688]: I0930 19:06:41.214093 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-jzzz7_f6514521-cd09-4a1e-b627-62e0703af4e6/cert-manager-cainjector/0.log" Sep 30 19:06:41 crc kubenswrapper[4688]: I0930 19:06:41.292685 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-tj7k4_cf0898fe-428e-49df-bafd-a45e3f38d476/cert-manager-webhook/0.log" Sep 30 19:06:52 crc kubenswrapper[4688]: I0930 19:06:52.697010 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-864bb6dfb5-wpjgb_5c329119-f853-41a4-89de-0f5358bc030d/nmstate-console-plugin/0.log" Sep 30 19:06:52 crc kubenswrapper[4688]: I0930 19:06:52.888761 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-44mqf_3510490b-a2cb-45a6-9bce-e671a256cde6/nmstate-handler/0.log" Sep 30 19:06:52 crc kubenswrapper[4688]: I0930 19:06:52.985087 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-lzq79_b3c49134-fd6f-43f7-b097-32d0e1384e5f/nmstate-metrics/0.log" Sep 30 19:06:52 crc kubenswrapper[4688]: I0930 19:06:52.993677 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-lzq79_b3c49134-fd6f-43f7-b097-32d0e1384e5f/kube-rbac-proxy/0.log" Sep 30 19:06:53 crc kubenswrapper[4688]: I0930 19:06:53.175968 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5d6f6cfd66-hmwrd_4e9693f8-bd8e-4ed6-8b39-0d7b926133c6/nmstate-operator/0.log" Sep 30 19:06:53 crc kubenswrapper[4688]: I0930 19:06:53.179680 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6d689559c5-j4lbr_5391ab76-cadc-44fb-b404-f1dd66d8f5f4/nmstate-webhook/0.log" Sep 30 19:06:54 crc kubenswrapper[4688]: I0930 19:06:54.429750 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:06:54 crc kubenswrapper[4688]: I0930 19:06:54.738154 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"eba62e61068f6df6206a93fbe79b5adc6614c0d8bfd5cb3eb8cd15c7e8f276c0"} Sep 30 19:07:06 crc kubenswrapper[4688]: I0930 19:07:06.772382 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-5l9vw_e73cf014-002c-4642-815a-8a77f2ec12ee/kube-rbac-proxy/0.log" Sep 30 19:07:06 crc kubenswrapper[4688]: I0930 19:07:06.938408 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-5l9vw_e73cf014-002c-4642-815a-8a77f2ec12ee/controller/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.000602 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-frr-files/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.183888 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-frr-files/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.192251 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-metrics/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.206661 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-reloader/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.221667 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-reloader/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.363129 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-frr-files/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.395557 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-reloader/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.443194 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-metrics/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.466302 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-metrics/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.563409 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-reloader/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.563421 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-frr-files/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.624743 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/cp-metrics/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.633986 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/controller/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.739046 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/frr-metrics/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.800272 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/kube-rbac-proxy/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.825461 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/kube-rbac-proxy-frr/0.log" Sep 30 19:07:07 crc kubenswrapper[4688]: I0930 19:07:07.926506 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/reloader/0.log" Sep 30 19:07:08 crc kubenswrapper[4688]: I0930 
19:07:08.012960 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-5478bdb765-v9pk8_ce858a63-686b-4a1c-98a6-de91bb6992e1/frr-k8s-webhook-server/0.log" Sep 30 19:07:08 crc kubenswrapper[4688]: I0930 19:07:08.228372 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d94b69f58-xr5pv_dfdf8b45-1929-49f8-95ef-94751966a5c3/manager/0.log" Sep 30 19:07:08 crc kubenswrapper[4688]: I0930 19:07:08.361332 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-85b8f7c4f8-d477v_0f7203cd-f438-40a7-b09c-210c6afa21ff/webhook-server/0.log" Sep 30 19:07:08 crc kubenswrapper[4688]: I0930 19:07:08.496162 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pfrp4_9936dd27-23ea-412c-98b4-dab61a77b58b/kube-rbac-proxy/0.log" Sep 30 19:07:09 crc kubenswrapper[4688]: I0930 19:07:09.013618 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pfrp4_9936dd27-23ea-412c-98b4-dab61a77b58b/speaker/0.log" Sep 30 19:07:09 crc kubenswrapper[4688]: I0930 19:07:09.393919 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs79f_4b5cdacf-945b-48ab-8105-9aed7ba409e2/frr/0.log" Sep 30 19:07:19 crc kubenswrapper[4688]: I0930 19:07:19.937621 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/util/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.145509 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/pull/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.168341 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/util/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.183077 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/pull/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.317321 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/extract/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.322808 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/util/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.336254 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bcvx8nv_add2c7c8-faaa-40f0-afce-74fa53bb3662/pull/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.483970 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-utilities/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.687702 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-utilities/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.695135 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-content/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.697494 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-content/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.823211 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-utilities/0.log" Sep 30 19:07:20 crc kubenswrapper[4688]: I0930 19:07:20.848610 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/extract-content/0.log" Sep 30 19:07:21 crc kubenswrapper[4688]: I0930 19:07:21.026203 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-utilities/0.log" Sep 30 19:07:21 crc kubenswrapper[4688]: I0930 19:07:21.316662 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-utilities/0.log" Sep 30 19:07:21 crc kubenswrapper[4688]: I0930 19:07:21.334060 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-content/0.log" Sep 30 19:07:21 crc kubenswrapper[4688]: I0930 19:07:21.364872 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-content/0.log" Sep 30 19:07:21 crc kubenswrapper[4688]: I0930 19:07:21.519483 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-utilities/0.log" Sep 30 19:07:21 crc kubenswrapper[4688]: I0930 19:07:21.596508 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/extract-content/0.log" Sep 30 19:07:21 crc kubenswrapper[4688]: I0930 19:07:21.763776 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2ndvz_cd1d1772-cc14-4494-b99c-78ffa31903a0/registry-server/0.log" Sep 30 19:07:21 crc kubenswrapper[4688]: I0930 19:07:21.821198 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/util/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.028610 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/util/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.048978 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/pull/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: 
I0930 19:07:22.116963 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/pull/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.271960 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/util/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.286637 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/pull/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.311460 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96vz8nx_8856e90e-bbf3-4c5b-a038-0380a7b7c70d/extract/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.465883 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ddkjs_f4669830-b238-42f8-8824-9aced2d9ca0a/marketplace-operator/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.607700 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dfp85_437e2975-7afe-4f5b-9dbd-90467f7ccc49/registry-server/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.653370 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-utilities/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.808443 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-utilities/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.811675 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-content/0.log" Sep 30 19:07:22 crc kubenswrapper[4688]: I0930 19:07:22.865928 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-content/0.log" Sep 30 19:07:23 crc kubenswrapper[4688]: I0930 19:07:23.023299 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-content/0.log" Sep 30 19:07:23 crc kubenswrapper[4688]: I0930 19:07:23.048942 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/extract-utilities/0.log" Sep 30 19:07:23 crc kubenswrapper[4688]: I0930 19:07:23.246255 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-utilities/0.log" Sep 30 19:07:23 crc kubenswrapper[4688]: I0930 19:07:23.260015 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n8lw2_1d7d14e9-8e91-4e6e-9043-7cfdea4ebca5/registry-server/0.log" Sep 30 19:07:23 crc kubenswrapper[4688]: I0930 19:07:23.409902 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-utilities/0.log" Sep 30 19:07:23 crc kubenswrapper[4688]: I0930 19:07:23.423779 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-content/0.log" Sep 30 19:07:23 crc kubenswrapper[4688]: I0930 19:07:23.443563 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-content/0.log" Sep 30 19:07:23 crc kubenswrapper[4688]: I0930 19:07:23.597652 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-content/0.log" Sep 30 19:07:23 crc kubenswrapper[4688]: I0930 19:07:23.609425 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/extract-utilities/0.log" Sep 30 19:07:24 crc kubenswrapper[4688]: I0930 19:07:24.511156 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wjxnj_558a9e5b-0ede-4aca-9dbd-560ea9578720/registry-server/0.log" Sep 30 19:07:46 crc kubenswrapper[4688]: E0930 19:07:46.969224 4688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.192:33868->38.129.56.192:40727: write tcp 38.129.56.192:33868->38.129.56.192:40727: write: broken pipe Sep 30 19:09:22 crc kubenswrapper[4688]: I0930 19:09:22.546220 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 19:09:22 crc kubenswrapper[4688]: I0930 19:09:22.547142 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 19:09:40 crc kubenswrapper[4688]: I0930 19:09:40.426719 4688 generic.go:334] "Generic (PLEG): container finished" podID="e7daf765-a0e3-45c8-9be0-81ca0b58bdee" containerID="833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc" exitCode=0 Sep 30 19:09:40 crc kubenswrapper[4688]: I0930 19:09:40.426818 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tf84h/must-gather-mmssn" event={"ID":"e7daf765-a0e3-45c8-9be0-81ca0b58bdee","Type":"ContainerDied","Data":"833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc"} Sep 30 19:09:40 crc kubenswrapper[4688]: I0930 19:09:40.429044 4688 scope.go:117] "RemoveContainer" containerID="833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc" Sep 30 19:09:40 crc kubenswrapper[4688]: I0930 19:09:40.535589 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tf84h_must-gather-mmssn_e7daf765-a0e3-45c8-9be0-81ca0b58bdee/gather/0.log" Sep 30 19:09:52 crc kubenswrapper[4688]: I0930 19:09:52.546048 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 19:09:52 crc kubenswrapper[4688]: I0930 19:09:52.546778 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 19:09:53 crc kubenswrapper[4688]: I0930 19:09:53.836485 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tf84h/must-gather-mmssn"] Sep 30 19:09:53 crc kubenswrapper[4688]: I0930 19:09:53.837414 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tf84h/must-gather-mmssn" podUID="e7daf765-a0e3-45c8-9be0-81ca0b58bdee" containerName="copy" containerID="cri-o://a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9" gracePeriod=2 Sep 30 19:09:53 crc kubenswrapper[4688]: I0930 19:09:53.845986 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tf84h/must-gather-mmssn"] Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.282169 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tf84h_must-gather-mmssn_e7daf765-a0e3-45c8-9be0-81ca0b58bdee/copy/0.log" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.282939 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.358056 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8z65\" (UniqueName: \"kubernetes.io/projected/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-kube-api-access-q8z65\") pod \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\" (UID: \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\") " Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.358217 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-must-gather-output\") pod \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\" (UID: \"e7daf765-a0e3-45c8-9be0-81ca0b58bdee\") " Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.363722 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-kube-api-access-q8z65" (OuterVolumeSpecName: "kube-api-access-q8z65") pod "e7daf765-a0e3-45c8-9be0-81ca0b58bdee" (UID: "e7daf765-a0e3-45c8-9be0-81ca0b58bdee"). InnerVolumeSpecName "kube-api-access-q8z65". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.460795 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8z65\" (UniqueName: \"kubernetes.io/projected/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-kube-api-access-q8z65\") on node \"crc\" DevicePath \"\"" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.544196 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e7daf765-a0e3-45c8-9be0-81ca0b58bdee" (UID: "e7daf765-a0e3-45c8-9be0-81ca0b58bdee"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.562972 4688 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e7daf765-a0e3-45c8-9be0-81ca0b58bdee-must-gather-output\") on node \"crc\" DevicePath \"\"" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.577029 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tf84h_must-gather-mmssn_e7daf765-a0e3-45c8-9be0-81ca0b58bdee/copy/0.log" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.578244 4688 generic.go:334] "Generic (PLEG): container finished" podID="e7daf765-a0e3-45c8-9be0-81ca0b58bdee" containerID="a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9" exitCode=143 Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.578293 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tf84h/must-gather-mmssn" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.578332 4688 scope.go:117] "RemoveContainer" containerID="a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.600249 4688 scope.go:117] "RemoveContainer" containerID="833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.669246 4688 scope.go:117] "RemoveContainer" containerID="a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9" Sep 30 19:09:54 crc kubenswrapper[4688]: E0930 19:09:54.669728 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9\": container with ID starting with a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9 not found: ID does not exist" containerID="a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.669780 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9"} err="failed to get container status \"a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9\": rpc error: code = NotFound desc = could not find container \"a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9\": container with ID starting with a4facb731b6af9fe9280a7add8440ccf87a2db50bc4e3c49de6072a14a6b74f9 not found: ID does not exist" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.669805 4688 scope.go:117] "RemoveContainer" containerID="833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc" Sep 30 19:09:54 crc kubenswrapper[4688]: E0930 19:09:54.671819 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc\": container with ID starting with 833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc not found: ID does not exist" containerID="833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc" Sep 30 19:09:54 crc kubenswrapper[4688]: I0930 19:09:54.671970 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc"} err="failed to get container status 
\"833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc\": rpc error: code = NotFound desc = could not find container \"833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc\": container with ID starting with 833cd5fff1475a0910115445095f35fd171e75d03a6ae4614ded9843a1df7dcc not found: ID does not exist" Sep 30 19:09:55 crc kubenswrapper[4688]: I0930 19:09:55.439902 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7daf765-a0e3-45c8-9be0-81ca0b58bdee" path="/var/lib/kubelet/pods/e7daf765-a0e3-45c8-9be0-81ca0b58bdee/volumes" Sep 30 19:10:22 crc kubenswrapper[4688]: I0930 19:10:22.545887 4688 patch_prober.go:28] interesting pod/machine-config-daemon-jggn4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 19:10:22 crc kubenswrapper[4688]: I0930 19:10:22.546740 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 19:10:22 crc kubenswrapper[4688]: I0930 19:10:22.546830 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" Sep 30 19:10:22 crc kubenswrapper[4688]: I0930 19:10:22.548096 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eba62e61068f6df6206a93fbe79b5adc6614c0d8bfd5cb3eb8cd15c7e8f276c0"} pod="openshift-machine-config-operator/machine-config-daemon-jggn4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 19:10:22 crc kubenswrapper[4688]: I0930 19:10:22.548209 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" podUID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerName="machine-config-daemon" containerID="cri-o://eba62e61068f6df6206a93fbe79b5adc6614c0d8bfd5cb3eb8cd15c7e8f276c0" gracePeriod=600 Sep 30 19:10:22 crc kubenswrapper[4688]: I0930 19:10:22.877963 4688 generic.go:334] "Generic (PLEG): container finished" podID="f607cccb-8e9d-47ff-8c93-6f4a791fa36b" containerID="eba62e61068f6df6206a93fbe79b5adc6614c0d8bfd5cb3eb8cd15c7e8f276c0" exitCode=0 Sep 30 19:10:22 crc kubenswrapper[4688]: I0930 19:10:22.878002 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerDied","Data":"eba62e61068f6df6206a93fbe79b5adc6614c0d8bfd5cb3eb8cd15c7e8f276c0"} Sep 30 19:10:22 crc kubenswrapper[4688]: I0930 19:10:22.878211 4688 scope.go:117] "RemoveContainer" containerID="d4b5e8dc22ede14f32f16376688fae021fa4c945090df230515a726865926d35" Sep 30 19:10:23 crc kubenswrapper[4688]: I0930 19:10:23.898410 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jggn4" event={"ID":"f607cccb-8e9d-47ff-8c93-6f4a791fa36b","Type":"ContainerStarted","Data":"9ba51d7a7e7ce3ddae4c990ceefe7c788c4db4e5c24b32708f64cbb28561ea3e"}